Aurelio Smith’s Analysis of Active Information
April 30, 2015 | Posted by Winston Ewert under Conservation of Information, Intelligent Design |
Recently, Aurelio Smith had a guest publication here at Uncommon Descent entitled Signal to Noise: A Critical Analysis of Active Information. Most of the post is taken up by a recounting of the history of active information. He also quotes the criticisms of Felsenstein and English, which we have responded to at Evolution News and Views: These Critics of Intelligent Design Agree with Us More Than They Seem to Realize. Smith then spends a few paragraphs developing his own objections to active information.
Smith argues that viewing evolution as a search is incorrect, because organisms/individuals aren’t searching, they are being acted upon by the environment:
Individual organisms or populations are not searching for optimal solutions to the task of survival. Organisms are passive in the process, merely affording themselves of the opportunity that existing and new niche environments provide. If anything is designing, it is the environment. I could suggest an anthropomorphism: the environment and its effects on the change in allele frequency are “a voice in the sky” whispering “warmer” or “colder”.
When we say search, we simply mean a process that can be modeled as a probability distribution. Smith's concern is irrelevant to that question. However, even if we are trying to model evolution as an optimization or solution-search problem, Smith's objection doesn't make any sense. The objects of a search are always passive in the search. Objecting that the organisms aren't searching is akin to objecting that Easter eggs don't find themselves. That's not how any kind of search works. All search is the environment acting on the objects in the search.
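The sense in which any search process "is" a probability distribution can be illustrated with a minimal sketch (the function names and the toy search strategies are my own, purely for illustration): run a search procedure many times and tally where it ends up, and the tallies approximate a distribution over the space.

```python
import random

def blind_search(space_size):
    """A blind search: pick one point uniformly at random."""
    return random.randrange(space_size)

def biased_search(space_size, target):
    """A crude 'warmer/colder' search: make three uniform picks
    and keep whichever lands closest to the target."""
    picks = [random.randrange(space_size) for _ in range(3)]
    return min(picks, key=lambda x: abs(x - target))

def empirical_distribution(search, space_size, trials=100_000):
    """Any search procedure, run repeatedly, induces a probability
    distribution over outcomes; estimate it by counting."""
    counts = [0] * space_size
    for _ in range(trials):
        counts[search()] += 1
    return [c / trials for c in counts]

space, target = 10, 3
p_blind = empirical_distribution(lambda: blind_search(space), space)
p_biased = empirical_distribution(lambda: biased_search(space, target), space)
# The biased search piles more probability mass on the target than 1/space,
# which is the kind of advantage active information is meant to quantify.
```

The organisms (here, the points of the space) do nothing in either procedure; the difference between the two searches lies entirely in how the process samples them.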
Rather than demonstrating the “active information” in Dawkins’ Weasel program, which Dawkins freely confirmed is a poor model for evolution with its targeted search, would DEM like to look at Wright’s paper for a more realistic evolutionary model?
This is a rather strange comment. Smith quoted our discussion of Avida previously, but here he implies that we've only ever discussed Dawkins's Weasel program. We've discussed Avida, Ev, Steiner Trees, and Metabiology. True, we haven't looked at Wright's paper, but it's completely unreasonable to suggest that we've only discussed Dawkins's "poor model."
Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt.
It is true that a static fitness landscape is an insufficient model for biology. That is why our work on conservation of information does not assume a static fitness landscape. Our model is deliberately general enough to handle any kind of feedback mechanism.
While I’m grateful to Smith for taking the time to write up his discussion, I find it very confused. The objections he raises don’t make any sense.
243 Responses to Aurelio Smith’s Analysis of Active Information
Weird statement: “fitness landscape”. I have never heard an IDer speak of a “fitness landscape” except with the preamble of “dynamic”. “Dynamic fitness landscape”, i.e., a landscape that is “chaotic, fluid … a kaleidoscope of constant change.”
I don’t think evolution can be modeled as a probability distribution.
Imagine a dart board in a dark room into which players throw darts. They throw the darts but without a target in sight to aim for.
If the dart board does not move, you should be able to model the probability distribution of the darts, but if it does move, how do you factor in the movement of the unseen board?
Most importantly, there is still a winner who has had no idea of the target’s position.
I’m grateful to Dr. Ewert for taking time to respond to my hastily written post. I’d enjoy exchanging views on whether evolutionary processes are searches and whether criticizing models that represent evolution as a search is an effective argument against evolution but I’m somewhat handicapped by having my IP blocked which means I have to use a VPN to bypass it and long comments hang. I’ve lost several. Perhaps someone could whitelist my IP. You can contact me via the email I registered with.
Thanks in advance.
With Intelligent Design Evolution, evolutionary processes are searches, actual active searches. With unguided evolution, evolutionary processes are passive, and if they happen upon a benefit, then so be it. All is well until they stumble upon whatever can eliminate them.
Nature tends to the most simple. It peels away the unnecessary and leaves what it cannot peel away, or has not peeled away yet. IOW nature searches for the simplest solution. It doesn’t stumble upon, nor can it build via accumulation, the information required for basic biological reproduction: The cell division processes required for bacterial life- living organisms are irreducibly complex all the way down.
I didn’t know that. Is there reason to believe that?
In any case, what does that have to do with Intelligent Design? Is Nature the designer?
The reason is it always takes the line of least resistance. It can produce stones, even piles of stones but not Stonehenges.
What it has to do with ID is that nature couldn’t be the designer.
The reason is it always takes the line of least resistance.
Always? How can anyone possibly know that? You’d have to examine every possible situation to claim that.
So? What does a human construction have to do with Nature’s inherent capabilities?
WHY NOT?
Yes, always, so far. Just as structures like Stonehenge will always require an intelligent designer.
Why not? For one there isn’t any evidence that it can be the designer. All observations and experience argue against it.
Human construction shows what requires intelligent agencies to produce. It also shows nature’s limitations.
For example see- Chase W. Nelson and John C. Sanford, The effects of low-impact mutations in digital organisms, Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9
Joe, Daniel, if you please…,
Bees, Beavers, Humans.
Honeycomb, Dam, Stonehenge.
Natural Design all? Humans transcend Nature?
True. Structures that human beings construct are constructed by human beings.
What does that have to do with “Nature always taking the line of least resistance” or Nature not being the designer of living organisms?
In regards to
“nature couldn’t be the designer.”
Daniel King asks
“WHY NOT?”
well, for a few examples, because…
Mr. King, you are certainly free to believe that unguided material processes can create all that unfathomable complexity (since you, contrary to your materialistic belief system, actually do have free will to choose what you believe is true), but I certainly don’t find your blind faith in unguided material processes persuasive! Especially since no one has ever witnessed unguided material processes produce non-trivial functional information/complexity:
Human construction tells us only what humans can produce. How does that show Nature’s limitations?
As we have Dr. Ewert’s attention I would love to hear his response to the problems I raised in a comment on AS’s post. I have repeated it here with a bit more detail.
Converting probabilities to their logs sometimes blinds us to the fact they are probabilities. So active information is defined as:
So active information = endogenous information – exogenous information
which is another way of expressing the ratio of two probabilities:
p = prob(success|blind search)
and
q = prob(success|alternative search)
But somehow this ratio p/q gets equated to the probability of the alternative search happening.
To do this requires:
1) Treating possible searches as a random variable
2) Selecting a way of enumerating possible searches (e.g. a “search” is defined as an ordered subset of all the variables to be inspected, so the set of searches is all possible ordered subsets)
3) Using Bernoulli’s principle of indifference to decide all searches are equally probable within this space of all possible searches
All of this seems to be assumed in your work rather than made explicit and when made explicit raises some rather fundamental questions. What is the probability distribution of searches? There are many ways of enumerating searches – how do you justify your choice? On what basis do you assume each one is equally probable?
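To make the ratio in Mark Frank's reading concrete, active information as described above is the difference of two log-probabilities, which is the log of the ratio q/p. A minimal sketch (function name and example numbers are my own, not from the DEM papers):

```python
import math

def active_information(p, q):
    """Active information I+ = endogenous - exogenous information
    = -log2(p) - (-log2(q)) = log2(q / p), where
    p = prob(success | blind search) and
    q = prob(success | alternative search)."""
    return math.log2(q / p)

# A blind search over 1024 items has p = 1/1024. An alternative search
# that succeeds half the time carries log2(512) = 9 bits of active information.
print(active_information(1 / 1024, 1 / 2))  # -> 9.0
```

Note that this computes only the performance gap between the two searches; Mark Frank's question about which distribution governs the choice of search is a separate matter that the formula itself does not settle.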
Winston Ewert writes:
Is there a process that springs to mind that cannot be modeled as a probability distribution? This is taking the path of defining something so broadly that “search” means “anything”.
My real concern lies elsewhere but we can come back to that if need be.
This may be my fault. I’m interested in the biology. If the math (which I freely admit I’m not good at) is a useful tool in understanding biological processes, all well and good. But you appear to be missing the mark in describing the biology. (Birds at ENV, for example)
Excellent! Glad we agree on that.
PS I’ll try breaking comments into chunks and see if they’ll post. Hope someone is looking into that IP glitch.
Winston Ewert writes:
I suggested a look at Sewall Wright’s paper as his approach is a classic attempt to describe gene combinations as a fitness landscape. He does not talk of environments as “landscapes”. Not you, perhaps, but many commenters in the preceding thread have become confused over maps, territories, islands, needles and haystacks.
My apologies if I implied that you had worked only on the Weasel model. I don’t think that and I see that you mention other computer models. In fact, I should have mentioned Conway’s Game of Life. This would be another example of a mathematical model that has been presented by some as modeling evolution. I’m sure you will agree this is far from the case.
Agreed. I’d go further. If you model evolution in a truly static fitness landscape, there will be no evolution.
Are you referring to “active information”? How does the idea of “the difference between the endogenous and exogenous information” help to address the dynamic, shifting interplay between a population of organisms and its niche?
Aside to admins:
Posting in short bursts during unsocial hours when the VPN seems to hang less is not convenient. It is especially frustrating when posting links. Could someone look into clearing the block on my home IP?
You aren’t deliberately trying to handicap critics on this site, I’m sure.
Thanks in advance.
@ Winston Ewert,
I hope you get chance to respond to Mark Frank’s comment 14, too.
Aurelio Smith has already pointed out the problem with this, but to put some specifics on it: under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with six 4′s would be a search. As would diffusion, if you want to look at something dynamic in time.
@ Daniel King
Bob O’H,
“Under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with six 4′s would be a search”.
I’m not sure that this is a good analogy.
When rolling a die, you ARE doing a search in a sense, you are searching for any number between one and six (depending on the die!). What else would you be rolling a die for?
Not to mention the actual intelligently designed die that has to be deliberately rolled to achieve a result.
Daniel:
Nature cannot produce Stonehenges, Daniel. Forensics, archaeology and SETI all rely on our knowledge of cause and effect relationships. Demonstrate that nature can produce something and we cannot say some intelligent agency was required to do it.
I would love to see someone demonstrate how this “game of life” models unguided evolution. I know it won’t happen but it would be nice to see an evo put its money where its mouth is.
And something else that is very strange: evos, if they had something, wouldn’t bother with Winston’s paper or his response. They would just present the evidence that demonstrates the power of unguided evolution. They would show us how it is operationalized. They would show us its entailments and its power. They would model it.
However they don’t even try. It’s as if they know they have nothing but to attack ID. Yet attacking ID will never provide support for their claims.
Mark Frank:
Can you please show us where that is in the paper?
There needs to be a “James Randi test” for evolutionism…
#26 Joe
Me:
Joe:
To repeat AS’s quote and link with my emphasis
From A General Theory of Information Cost Incurred by Successful Search
Thanks Mark- I was looking in the “Active Information” paper- ie the wrong paper.
logically_speaking – that wasn’t an analogy, it was a direct consequence of the definition!
TBH, I think you are stretching the definition of a search – a search is for something, which is a subset of everything being searched. So, rolling a die to “search” for a number from 1 to 6 is bizarre, as the ‘search’ will always be successful first time around. So, in what sense is it a search, rather than a RNG?
Bob O’H: Context. In a game where dice are used, the value on a toss will feed an outcome, and such an outcome may shape onward steps etc. E.g. starting at a random location, I can use dice tosses to guide steps in a random walk: red 1-3, that many steps backwards; red 4-6, that many steps forward; and green, similar but left/right. This would explore a space and constitutes a search, especially if there is a reward function based on where one lands. Thus, a probability distribution can be integral to or tantamount to a search. So, the partly blind, chance-driven search of a config space makes sense. And indeed, in the introduction to the paper such a context is explored via a drone over a field of cups covering items. (A picture with hex-packed pills is used to illustrate.)

The basic point is that we have a reference search: take a flat random sample or a random walk (maybe with drift) etc. As we are under needle-in-haystack blind search circumstances, the target zones are maximally unlikely, and the other options are samples with a bias. But the blindness extends to the search for a golden, or at least good, search that plunks us down next to a target zone. That comes from a higher-order space. If W possibilities are there directly, the searches as samples come from a set of 2^W possibilities, making S4S (and higher-order searches yet) plausibly progressively harder. So if a search drastically outperforms flat random, it is reasonable to infer that it was not blindly chosen and/or does not act blindly. From this gap to be bridged we may infer info conveying an advantage: active info. And the degree of effect relative to a flat random blind search is reasonable as a metric. And the information can be put in probabilistic terms. Cf here: http://www.uncommondescent.com.....formation/ KF
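The dice-guided random walk described above can be sketched in a few lines (the exact movement rules are my own reading of the comment, not a specification): each run samples one endpoint, and many runs trace out the probability distribution the walk induces over the space.

```python
import random

def dice_walk(steps, start=(0, 0)):
    """A random walk guided by two dice, per the comment's sketch
    (details assumed): a red die moves backward on 1-3 and forward
    on 4-6 by the face value; a green die does the same left/right."""
    x, y = start
    for _ in range(steps):
        red = random.randint(1, 6)
        green = random.randint(1, 6)
        x += red if red >= 4 else -red      # forward on 4-6, backward on 1-3
        y += green if green >= 4 else -green
    return x, y

# Many independent walks approximate the induced distribution over endpoints;
# adding a reward function over locations would turn this into a search.
endpoints = [dice_walk(20) for _ in range(1000)]
```

This is the sense in which a probability distribution can be "integral to" a search: the dice rules fully determine where the walk is likely to end up.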
kf – in a game of dice a search metaphor makes sense, but according to Ewert just rolling a die (for whatever reason) is a search.
Bob O’H (& attn MF):
Ewert spoke in a context, with three initial background sections in a 40+ pp. paper. That context from the outset is blind, needle-in-haystack search, and the probability distributions relate to taking searches, which are samples of config spaces. And in particular, blind samples.
Notice how the main body opens:
So, whatever infelicities of expression you may see or may think you see, that controlling context should be borne in mind.
The probability distributions are in effect ways to address degrees of bias in samples, including samples based on an incremental search as is defined with reference to the search matrix which builds in a next step process.
The issue is, how do evolutionary type searches outperform the yardstick, flat random sample blind needle in haystack search. The answer is, by input active information, such as obtains with say a warmer/colder signal pattern.
In that broad context, different search strategies are effectively the same as differing probability distributions affecting sample choices.
This also points to the case of search for a golden search, which puts you down on a target zone. Higher order searches for good searches are going to challenge you so that they will not — if blind — be likely to hand you a golden search.
And given the strong statistical constraints imposed by the needle-in-haystack situation, it is reasonable to see a very-likely-to-succeed strategy as one that finds a way to add in info that guides the search, making the otherwise infeasible feasible; the performance gap is a measure of the injected, bridging, active information.
KF
kf – if Ewert meant his definition within a context then hopefully he’ll clarify it here in the comments. As it is, his statement seems pretty unambiguous, with no suggestion that he means it within a certain context. I hope he’ll do this – it would be good to know precisely what he means by a search.
We don’t actually assume a uniform distribution. The contribution of “A General Theory of Information Cost Incurred by Successful Search” is to show that conservation of information still applies under a non-uniform initial distribution.
The conclusion of conservation of information is that in order to produce complex life, the initial distribution of the universe must have been configured in such a way as to increase the probability of producing complex life.
So?
I’m sure there is some merit in looking at Wright’s paper. But is he really doing anything that hasn’t been repeated in computer models?
Many computer models of evolution do indeed model a static fitness landscape and do in fact experience evolution of a sort. So either your prediction is utterly incorrect, or I’ve not understood it.
What I’m saying is that conservation of information merely requires that your search be a probability distribution. Your dynamic shifting process is still modelable as a probability distribution.
Indeed, all of those are searches.
Great to hear from you, Dr Ewert,
you write:
So, we’re right in reading DEM (yourself in association with Dembski and Marks) as saying “search” is synonymous with “probability distribution”?
The paper was published in 1932 but it is prescient. It’s not a long paper, only 11 pages but packed with ideas.
In a static fitness landscape, once the allele frequency has fixed, evolution will slow effectively to nothing. Models that show evolution producing change in a truly static environment are not meaningful models of evolution in my view.
Here we get to a point of concern. Whether there is any justification for “conservation of information”.
That’s very clear, thanks.
Winston #35
Which pdf are we talking about? You identify a search with a pdf, and your paper appears to show that the LCI holds even when that pdf is not uniform. But that is not my point. You conclude that finding a more efficient search has an “information cost” which seems to be identified with the probability of finding that more efficient search, i.e. the chances of success in the search for the search. This implies you must have some kind of pdf in mind for the space of all possible searches. That is the pdf I am questioning. Otherwise the pdf might simply assign zero probability to all searches that are less efficient than the improved search – which would certainly scupper the LCI.
Nowhere can I find an explicit explanation of the pdf of possible searches although I think you are assuming each of those matrices which identify a search are equally probable.
Bob O’H: All I am doing is pointing out the actual controlling context which the authors have a right to assume will be taken into account in reading. Text out of context = pretext is a classic problem of interpretation. KF
Bob O’H:
Let me follow up by clipping the opening words, verbatim:
>> 1. The Search Matrix
All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.
In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases. >>
I trust the point about reading in context is clear enough.
KF
Why can’t evolution occur in a static environment? Is Aurelio really suggesting that mutations will not occur in a static environment? Isn’t Lenski’s experiment a static environment?
Joe asks:
If the environment is truly static, there will be no selective pressure.
No, of course not. That is the whole point. Variation appears due to well-understood processes of imperfect replication etc. This is independent of the environment (caveats on radiation-induced mutation etc)
Nope. It’s boom and bust.
Aurelio:
That is incorrect. There may not be any selection pressure once the population’s fitness is optimized, but there will be until then.
So what does “truly static” mean?
Except it isn’t “well understood”. Basic biological reproduction is irreducibly complex and as such requires an Intelligent Designer.
Isn’t Lenski’s experiment a static environment?
I think whether or not it is static is debatable.
Bob O’H.
There’s no problem with the rolling of a die as a search.
When you roll a die it’s an experiment.
Aurelio Smith:
Could you be any more dense?
Joe:
There is nothing to indicate that more sophisticated methods of biological reproduction did not arise through evolutionary pressure applied on simpler methods of reproduction used in the past.
On the other hand, I have yet to see anyone write something describing how Intelligent Design methods could be applied to biology.
So, I’m right in reading DEM (yourself in association with Dembski and Marks) as saying “die” is synonymous with “search”?
Dr Ewert,
In the hope you have time to respond further, may I ask you about something you wrote in your ENV article. I quote:
I’m curious. What do you mean by “configuration of the universe”?
A theistic worldview might be that God configured the universe to produce the outcome we see. Humankind and their parasites. This is consistent with observed evidence as far as I can tell. Evolution explains the how, not the why.
Joe:
I think Joe is right. I don’t see what would stop a better configuration than the current one developing in any given environment, even if that environment is static.
mung asks:
Possibly. Are you offering lessons?
Carpathian claims
“There is nothing to indicate that more sophisticated methods of biological reproduction did not arise through evolutionary pressure applied on simpler methods of reproduction used in the past.”
and yet the facts say something very different:
http://www.uncommondescent.com.....ent-561891
Carp goes on to state:
“On the other hand, I have yet to see anyone write something describing how Intelligent Design methods could be applied to biology.”
Here you go:
On the other hand, presupposing everything is just a cobbled together series of accidents, as Darwinism does, has hindered research into biology with, (dogmatically held), erroneous concepts such as vestigial organs and junk DNA
bornagain77:
Your quotes support that biology has analogies in human design but you have not shown a design methodology.
The first question to ask if I’m going to design a biological species is, “What is the future going to look like?”
If I don’t know the environment, what is my specific goal?
Secondly, how many initial copies will I make? It has to be more than two and maybe less than a million, but how do I know that?
Carpathian:
Just the evidence: The cell division processes required for bacterial life
There has been plenty written about it.
Aurelio:
Except that evolution has yet to explain the how. The only thing so far is we are told to be comforted by the fact that evolution did happen.
Carp, so you want to jump directly from just trying to get a firm handle on studying, and understanding, the unfathomed complexity being found in biology to creating the unfathomed complexity of biology?
Good luck with all that:
Dr. Fuz Rana, at the 41:30 minute mark of the following video, speaks on the tremendous effort that went into building the preceding protein:
Engineering principles, not Darwinian principles, lead to breakthroughs in designing new, relatively simple, proteins!:
Aurelio, that you would think that DEM mean “search” to be synonymous with “probability distribution” says all about you that Winston needs to know. And from your own mouth.
Winston Ewert:
Aurelio probably thinks evolution is synonymous with Conway’s Game of Life.
Carpathian writes:
I don’t totally rule out the possibility. It happened once before.
Are you going to bring Larry Moran out from behind that hoarding? OK, there’s drift.
Do you bother to read other comments at all?
Yes, Aurelio, I do bother to read the comments. That’s how I came up with the Game of Life reference. You see, I thought perhaps you just didn’t understand what the term synonymous means. So I performed an experiment.
Here’s a suggestion. If you really want your ip unblocked act less like a troll.
bornagain77:
That is the problem ID has to get around. ID failed here to equal biology which is exactly my point.
ID is extremely difficult in that you don’t know what information you’re trying to put together for a given target.
Evolution may be improbable in the sense that you cannot search for a target but ID’s problem is to define the required target before designing.
Determining your “spec” for a design is more difficult than actual physical design.
How would one determine a better design for a predator in an environment that is 100 years off into the future?
That is the missing information.
Without an ability to foresee future environments you cannot design a solution.
mung writes:
after Aurelio writes (in comment 16);
Advice from the master.
On the serious point, UD either allows a plurality of voices or it doesn’t. That’s up to the powers-that-be. But if they would rather I didn’t participate, they just need to say so. If they agree to my participation, perhaps they might look into whitelisting my IP. Entirely up to the UD management.
Intelligent Design and evolution are NOT mutually exclusive.
Plurality of voices is one thing. But if our opponents want to be heard all they have to do is work on supporting unguided evolution- find a way to model it would be a great start.
Carp, I believe God, who is omniscient and who created/creates time itself, is the Designer of the universe and of all life in it. Thus ‘far off targets’ in the future are child’s play for Him in His infinite knowledge since even time itself belongs to Him.
Moreover, I never claimed that man was omniscient in his capacity as a designer.
Apparently atheists are not so humble in their assessment of their own finite abilities since they, from my repeated debates with them, obviously think they know how to design things much better than God did.
Dear Winston,
I understand that it is necessary for the conclusions of DEM that searches can be modeled as probability distributions.
The first attempt to model searches generally that way was in the paper “The Search for a Search” – and it failed.
I have trouble understanding how the probability distributions introduced by the algorithm in “A General Theory of Information Cost Incurred by Successful Search” model the underlying searches in any meaningful way.
Take, e.g., as search space the natural numbers 1..100, and as fitness function “distance to a target”. Knowing this, I can construct an initiator, a terminator, an inspector, a navigator, a nominator, and a discriminator. Using those for the target {1}, I may get a probability distribution of P(S=1)=1, P(S!=1)=0: my search will find this target every time.
What conclusions can I draw from this model? Nothing meaningful. For example: what happens when the target is {2}? The probability to find this target could be 1, could be 0, could be anything in between. This “model” doesn’t differentiate between a complete search and a search which will always return “1”…
And frankly, if my target is {1} – what is the big difference between a search represented by P(S=1) = 9/10, P(S=50) = 1/10, otherwise 0 – and P(S=1) = 9/10, P(S=51) = 1/10?
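DiEb's worry can be put in code. A minimal sketch (the dict-of-probabilities representation and function name are my own simplification of the paper's machinery): once a search is reduced to a distribution over outcomes, the only question the representation can answer is how much mass sits on the target set.

```python
# Two "searches" over {1..100}, each represented only by its outcome
# distribution (outcome -> probability); omitted outcomes have probability 0.
always_one = {1: 1.0}            # deterministically returns 1
near_miss = {1: 0.9, 50: 0.1}    # returns 1 most of the time

def success_probability(distribution, target):
    """All the representation supports: total mass on the target set."""
    return sum(p for outcome, p in distribution.items() if outcome in target)

print(success_probability(always_one, {1}))  # -> 1.0
print(success_probability(always_one, {2}))  # -> 0.0
```

The second result illustrates the complaint: the representation assigns 0 to the target {2}, yet says nothing about whether the underlying procedure (a full enumeration, or a constant "return 1") would in fact find 2 if 2 were the target.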
A question for Aurelio (and other critics).
There are two urns, each containing a number of colored balls. You cannot see inside the urns.
You pay one dollar to play and can select one ball from either urn. If you select a red ball you win one dollar.
Which urn will you choose to select from?
Hopefully, there’s going to be a point to this.
You’re crazy if you think anyone would waste time betting on such a stupid game. Who said that there was a red ball anywhere? Who said there was a difference between the two urns?
Mung is incoherent. Consistently.
Daniel King:
Crazy like a fox. Looks like I caught a fish though.
No one.
Given the amount of information, one urn is as good as the other.
ok, the colored balls are either red or blue. Does that help?
Section 5 of the paper discusses how these distributions are obtained.
What you’ve said is incorrect, but what you meant is probably correct. A search is any process that can be modeled as a probability distribution. A die can be modeled as the distribution {1/6,1/6,1/6,1/6,1/6,1/6}, but the die is not the same thing as the distribution. For one, I can physically stack dice, but I can’t physically stack the distributions. Similarly, a search can be modeled as a distribution, but they aren’t the same thing.
I mean the combination of the physical laws of the universe together with any initial conditions.
I don’t know, a lot of people play the lottery.
DiEb,
Saying that you don’t like or find useful the way we’ve modeled things isn’t a criticism of our work. It is pointless complaining.
WE:
Aw shucks. I was wrong:
Of course, unlike Aurelio, I know what I said was ludicrous.
There are other ways to model a die. Say it doesn’t have dots on its six faces but colors. What prevents us from assigning a value to each color?
Winston Ewert:
Sorry, I’ll try to make the point of my complaint more clear:
In your paper “A General Theory of Information Cost Incurred by Successful Search”, you don’t use the terms model or modeling, not even once. You only claim that searches can be represented as probability distributions (and I thought that even this language was a little bit strong). Now you go even further and say
There are countless ways to define mathematical modeling. But the unifying concept is that a model allows for (non-obvious) predictions. To elaborate:
1) The simplest model in population dynamics is to represent the size of the population by a real number, and to assume that the growth is proportional to time (for short amounts of time) and to size. This model allows one to predict the size of a given population in the future – after measuring the growth rate and population size. If the predictions fail, the model will be refined – or cast away.
2) I can represent the planets of the solar system by the lengths of their English names: Mercury, Jupiter and Neptune by 7, Saturn and Uranus by 6, Venus and Earth by 5, and Mars by 4. The only predictions I can draw from this model are along the lines that Mars has the shortest English name. I hope we can agree that this isn’t a model of the solar system.
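The growth model in point 1 can be sketched directly; the numbers here are illustrative placeholders, not measured values:

```python
import math

def predict_size(n0, r, t):
    """Predict population size at time t from the model dN/dt = r*N."""
    return n0 * math.exp(r * t)

# Illustrative (not measured) initial size and growth rate.
n0, r = 1000.0, 0.05
predicted = predict_size(n0, r, 10.0)   # roughly 1648.7

# Compare the prediction against a later (hypothetical) measurement;
# a large mismatch means the model gets refined or cast away.
observed = 1650.0
relative_error = abs(predicted - observed) / observed
model_ok = relative_error < 0.05
```

This is what makes it a model in DiEb’s sense: it sticks its neck out with a number that a later measurement can contradict.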
So, what predictions does your model allow for? In particular, take my example in the comment above (nr. 68): what can you predict about a search which finds the target {1} with certainty – if {1} was indeed the target?
I may not like your model. But what use is it even to you?
DiEb,
Evolution doesn’t predict anything.
#73 WE
I apologise. I had not properly understood this section. What would really help here would be a worked example. However, I will try to explain my concern.
I think that what you are saying amounts to:
If you define a subset of all the possible probability distributions on omega (e.g. those which make the probability of finding the target > q) then this places constraints on the probability density function of the probability distributions in M(omega).
To make it concrete, consider the case where omega is just two items, a1 and a2. There are infinitely many pdfs possible on a1 and a2 – ranging from p(a1) = 1 to p(a1) = 0. These pdfs are the members of M(omega). At this point you have no other information about omega, so you have no idea about the higher-level pdf of the members of M(omega). It might be that only pdfs where p(a1) > 0.8 are possible. It might even be that the only possible pdf on omega is p(a1) = 1. It would depend on the process for generating pdfs.
You could then define a function on all those pdfs, e.g. g(pdf) = 1 if p(a1) > q, 0 if p(a1) <= q. This would enable you to conceptualise P(g(pdf) = 1). But clearly you cannot deduce that probability without making some assumptions about the prior probability of the members of M(omega). And I am struggling to see where those assumptions are articulated (although I suspect you are assuming that the pdf of pdfs is uniform between 0 and 1).
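A quick simulation of this construction (Python; the uniform prior over p(a1) is exactly the assumption being questioned, made explicit here as an assumption):

```python
import random

# For omega = {a1, a2}, each member of M(omega) is fixed by p(a1).
# ASSUMPTION under scrutiny: a uniform prior on p(a1) over [0, 1].
q = 0.5

def g(p_a1):
    """Indicator on pdfs: 1 if this pdf gives a1 probability above q."""
    return 1 if p_a1 > q else 0

random.seed(1)
n = 100_000
hits = sum(g(random.random()) for _ in range(n))
prob_g = hits / n   # estimates P(g(pdf) = 1) under the uniform prior

# Under the uniform prior this comes out near 1 - q = 0.5. A different
# prior over M(omega) would give a different answer, which is the point:
# the probability is not deducible without the prior.
```

Swap `random.random()` for any other generator of p(a1) values and `prob_g` changes accordingly.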
DiEb:
I think, in respect of “modelling” the clip at 39 above to Bob O’H is relevant:
I suggest to you that the references to needle-in-haystack searches, searches and representation all directly imply a modelling approach. As is common in many applications of mathematics to situations of interest.
Wiki:
I would suggest that Marks, Dembski, Ewert et al have been working at a mathematical modelling exercise and have been gradually making it of wider and wider applicability. They began by using flat random sampling as a reference yardstick search of a config space, making a reasonable case on S4S that greatly improved searches will be so case-specific that a blind search of the space of possible searches, when combined with the resulting search, implies that the likelihood of combined success is no greater than that of a straight flat random sample in a needle-in-haystack context.
For me, that plausibility is strengthened by reflecting on the fact that, as a search is a sampled subset of a set of cardinality W, the S4S space has cardinality 2^W, that of the power set. Which becomes much harder as W runs to 10^150 – 10^300 at the lower relevant minimum.
So, I find your “fail” dismissal inappropriate, unwarranted and selectively hyperskeptical.
In the 2013 paper, M, D, E explicitly set out to generalise, removing the first level search from a flat random one. This they have in fact done.
They have indicated onward work that will move to shape and location shifting targets. Of course, the dominant issue is the needle in haystack challenge and linked S4S so it is reasonable that shape shifting and moving like barrier islands will not materially affect the outcome.
KF
PS: And oh yes, what relevant aspect of the sol system is modelled by representing planetary name length by letter counts? Is that not a blatant strawman caricature on your part? (In context of recent activities by your side’s lunatic fringe, what message does resort to such lurid caricature send to such? Especially, when it is joined to blanket dismissiveness of a serious case? Please, think again on how you are arguing, given the LF.)
MF,
kindly cf the just above to DiEb.
A blind search of a space dominated by non-function and with small targets with inadequate time and atomic resources to sample enough of the v. large space W to make detection of an island Ti likely, is a challenge.
The set of searches on W is a set of the subsets of W. The cardinality of the onward space for S4S is 2^W, W starting out at 10^150. Blind search for a golden search is so hard that combining it with an improved search will be harder, much harder generally on avg, than direct blind search.
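For a small finite W, the counting being described is just the size of the power set. A Python sketch of that arithmetic only (whether a search should be identified with a subset rather than a probability distribution is itself disputed elsewhere in this thread):

```python
from itertools import chain, combinations

def power_set(items):
    """Every subset of items, from the empty set up to the full set."""
    s = list(items)
    return list(chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

W = ['x1', 'x2', 'x3', 'x4']
subsets = power_set(W)
# 2^|W| subsets: already 16 for |W| = 4, and utterly beyond
# enumeration once |W| is on the order of 10^150.
assert len(subsets) == 2 ** len(W)
```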
All this, in a context where the target is FSCO/I rich configurations. We already know that there is a known, adequate cause. Intelligently directed configuration that injects active intelligently sourced configuring information as a bridge that puts you down on or next to a target Ti. This then allows troubleshooting exploration to achieve adequate function through a much more restricted and feasible search.
The cluster of considerations brings us back full circle to the point that FSCO/I in a configuration and part interaction, wiring diagram based entity, is a strong sign that its best current causal explanation is design.
KF
Dr Ewert:
I had this remark of yours from E&NV in mind:
Then why call Darwinian evolution search? The fact that any so-called search can be reduced to a probability distribution does not mean that any stochastic process that can be reduced to a probability distribution is a “search.”
I mis-spoke and can quite clearly see the map/territory distinction.
DiEb says,
In your paper “A General Theory of Information Cost Incurred by Successful Search”, you don’t use the terms model or modeling, not even once.
I say,
I find the contrast between model and search to be interesting. It illustrates the fallacy of equating software like Avida to other actual scientific models.
from here:
http://en.wikipedia.org/wiki/Scientific_modelling
quote:
A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality,
end quote:
With this description we can construct the following syllogism.
Axiom) models reflect reality.
Premise one) Evolution is not searching for any specific target other than survival.
Premise two) Evolutionary Algorithms are searching for specific targets.
Conclusion) Evolutionary Algorithms are not “models” of Evolution.
peace
#80 KF
Edited (pressed enter too early)
This is wrong. Using DEM’s definition of a search, the set of searches on W is the set of all possible pdfs over W. This is quite different from the set of subsets. Among other things, it is infinite in size, while the set of subsets of a finite set is itself finite.
@ Winston Ewert,
Excuse me for adding another question to the list.
There is a commenter here, Kairosfocus, who has written much about something he calls FSCO/I
Here’s a sample:
Is FSCO/I something you’ve heard of?
If you have, do you (and as spokesman for DEM) endorse it?
AS,
Isn’t it time you paid attention to the idea history instead of trying to tag and dismiss?
Let me again cite to you the roots in Orgel, Wicken and co, for the descriptive focus on the FUNCTIONALLY specified subset of CSI. Which on pp 148/9 of NFL, Dembski highlights as the aspect relevant to biological systems:
______________
http://iose-gen.blogspot.com/2.....l#fsci_sig
>> The observation-based principle that complex, functionally specific information/ organisation is arguably a reliable marker of intelligence and the related point that we can therefore use this concept to scientifically study intelligent causes will play a crucial role in that survey. For, routinely, we observe that such functionally specific complex information and related organisation come– directly [[drawing a complex circuit diagram by hand] or indirectly [[a computer generated speech (or, perhaps: talking in one's sleep)] — from intelligence.
In a classic 1979 comment, well known origin of life theorist J S Wicken wrote:
The idea-roots of the term “functionally specific complex information” [FSCI] are plain: “Organization, then, is functional[[ly specific] complexity and carries information.”
Similarly, as early as 1973, Leslie Orgel, reflecting on Origin of Life, noted:
Thus, the concept of complex specified information — especially in the form functionally specific complex organisation and associated information [FSCO/I] — is NOT a creation of design thinkers like William Dembski. Instead, it comes from the natural progress and conceptual challenges faced by origin of life researchers, by the end of the 1970′s.
Indeed, by 1982, the famous, Nobel-equivalent prize winning Astrophysicist (and life-long agnostic) Sir Fred Hoyle, went on quite plain public record in an Omni Lecture:
So, we first see that by the turn of the 1980′s, scientists concerned with origin of life and related cosmology recognised that the information-rich organisation of life forms was distinct from simple order and required accurate description and appropriate explanation. To meet those challenges, they identified something special about living forms, CSI and/or FSCO/I. As they did so, they noted that the associated “wiring diagram” based functionality is information-rich, and traces to what Hoyle already was willing to call “intelligent design,” and Wicken termed “design or selection.” By this last, of course, Wicken plainly hoped to include natural selection.
But the key challenge soon surfaces: what happens if the space to be searched and selected from is so large that islands of functional organisation are hopelessly isolated relative to blind search resources?
For, under such “infinite monkey” circumstances , searches based on random walks from arbitrary initial configurations will be maximally unlikely to find such isolated islands of function . . . >>
_____________
Let us see if you will now respond to substance instead of trying to isolate and target personalities. Which, I point out to you is enabling behaviour in the context of the existence of the ever present lunatic fringe. As well as the tactical recommendation of one certain SDA in his notorious rules for radicals.
KF
fifthmonarchyman: Premise two) Evolutionary Algorithms are searching for specific targets.
Not all evolutionary algorithms search for specific targets.
PS: Dembski, NFL:
>> p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.
I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:
Wouters, p. 148: "globally in terms of the viability of whole organisms,"
Behe, p. 148: "minimal function of biochemical systems,"
Dawkins, pp. 148 - 9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by ran-| dom chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction."
On p. 149, he roughly cites Orgel's famous remark from 1973, which exactly cited reads:
And, p. 149, he highlights Paul Davis in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . .”
p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ” >>
PPS: Meyer, in his reply to Falk’s critique of Signature in the cell:
___________________
http://www.signatureinthecell......l-falk.php
>> . . . [[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents[[--> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . . >>
__________________
Meyer here speaks directly to functionally specific complex information in digitally coded form, but with direct application to wider FSCO/I.
I trust this isolate, tag and dismiss rhetorical gambit will now be retired. At least, by those interested in addressing substance rather than techniques of caricature and dismissal.
MF,
you first need to read the already given context of that little clip, the opening paragraphs of the MDE 2013 paper:
Secondly,the substantial matter is that a search samples from the set W with some distribution of probabilities regarding likelihood of any particular member xi being picked up, i.e. a probability distribution function across the set of members xi constituting W.
Any given search then kicks out a collection of members, which per definition is a subset of W. Therefore the space of all possible samples — and thus subsets — of W is the space from which any given search MUST come, ranging from {} to W itself (a 100% census). Of course the particular samples picked will be chosen based on the specifics of sampling, which imposes a further distribution, pointing onward to a higher yet order search.
Therefore it is entirely appropriate to point out that searches will come from a much higher scaled set, 2^W in cardinality.
Which renders immediately highly plausible the finding of M, D & E that such a search for a good search imposes a cumulative search burden that is at least as hard as a null search based on some natural sampling of the original W.
Which as I argued just now:
http://www.uncommondescent.com.....and-fscoi/
. . . leads directly back to the blind needle in haystack search challenge imposed by the requisites of FSCO/I.
Active info is a bridge coming from designers that makes searches feasible.
KF
#89 KF
You are confusing the result of the search with the search. Two completely different pdfs may end up with the same results. DEM defines a search as a pdf. Therefore a search is not the same as the subset of W which results from the search.
MF, I have sufficiently shown
(i) that searches per whatever means applied will impose some degree of bias from zero up to absolute, in selection, which gives the distribution that Marks, Dembski and Ewert speak of (per citation),
(ii) That each search will pick a subset of the set W, so that a blind population of searches will come from the set of subsets. By necessity of what a sample is, what a subset is, and what a cluster of actual samples will be as a result.
This addresses your intended correction and shows that it is itself in need of correction. That (ii) is so is actually independent of the fact that (i) is so. Both are true and both carry some relationship, but (i) does not overturn (ii). And in the sense that the authors wrote, searches do organically connect to distributions on the set W.
KF
Zac says,
Not all evolutionary algorithms search for specific targets.
I say,
Not all ID critics are incapable of having a genuine discussion of the issues.
Zac again demonstrates that he is not one of the honest critics by not providing examples and evidence for his claim.
peace
PS
That is why he is ignored so often
I elaborated on the conclusions we draw from our model over at Evolution News and Views. I don’t feel the need to repeat myself here.
See the top of page 58. We begin with any arbitrary distribution mu on omega. This is projected, as discussed in section 5, to a distribution mu-bar on M(omega). (You could as easily go in the other direction, and start with a distribution mu-bar on M(omega) and produce a distribution mu on omega.) That is, you can pick any arbitrary distribution that you deem the natural distribution. We are not assuming anything about the distribution, and certainly not that it is uniform.
Note that this means that you could decide that the natural distribution of the universe places a high probability on complex life. The result of conservation of information has no quarrel with you if you take that stance.
That’s not remotely what I’ve said. I’m not saying that because both searches and stochastic processes can be reduced to a probability distribution they are the same. That would indeed be incorrect.
What I’ve said is that for our purposes we define a search to be a process that can be reduced to a probability distribution. So all processes, no matter how insane, that can be reduced to a probability distribution are searches for the purpose of COI. That is simply a matter of how we’ve chosen to define our terms.
I’ve seen posts about it. I’m not inclined to take it seriously until I see it published some place more serious than a blog.
KF #91
All you have shown is that you don’t understand DEM’s paper – much less the problems with it. But it is not worth pursuing this any more.
Mark Frank, All you and yours have done is prove that unguided evolution doesn’t have squat. Your entire position is nothing but bald declarations and attacks on anyone who questions them.
It is very telling when all you have to do to stop ID is to step up and produce support for the claims of your position and you choose to act differently.
Typical but still pathetic.
WE:
Pretty serious fine-tuning and front loading!
KF
bornagain77:
Yes. Information is the key here.
In order for ID to work, there is a requirement that the designer be able to see the future and commit almost no errors.
This is a problem for the ID movement though in that ID cannot be debated on a scientific basis if this is true.
If the claim is that no one but God can possibly engage in ID, we need faith to believe in ID. We have then moved into a religious debate not a scientific one.
MF (attn WE): I have pointed to the antecedents for the descriptive summary in Orgel, Wicken, Dembski and Meyer from 85 on above: http://www.uncommondescent.com.....ent-562213 These will handily meet the more-serious-than-a-blog criterion. After all, all that I have done, and others too, is to create an acronym for a stock descriptive phrase for functionally specific forms of complex specified information. And in the case of Orgel and Wicken, that was the original context. Dembski used a generalisation to speak of specified complexity in general. KF
PS: Onlookers, you will be able to judge the [want of] seriousness of onward objectors if they refuse to discuss FSCO/I on grounds that it is not taken up by “senior” ID persons.
MF, pardon directness but you are simply being personally dismissive in a context where you were specifically corrected by direct citation on the contextual meaning of the reference to probability distributions. KF
WE
Thanks for going to this effort. It is both enlightening and frustrating. I guess I am confused by what you mean by “project”, or at least by its implications. As far as I know a projection is just a type of mapping from one set to another. So you can map a pdf on omega to a subset of M(omega). But so what? How do you jump from this to concluding anything about the ontological status or probabilities of the two distributions? It would help to have a concrete example. Suppose omega comprises just two members, A and B. Then as I understand it M(omega) is the set of all possible pdfs on omega and is defined by all the possible values of P(A), i.e. all the real numbers between 0 and 1. Can you give an example of mu and mu-bar?
Carpathian:
Not at all.
First, all designers anticipate future possibilities, we look to goals.
Second, a world of technology all around us shows that initial designs can be incrementally developed to adequate performance and reliability to be good enough for purpose.
Perfection is not required. Just a sophisticated technical base.
The PC you are reading this on is good enough as a case in point.
KF
MF, observe the context of blind, needle-in-haystack search, and the onward context that any given search will typically be utterly uncorrelated to where targets Ti may be found, the individual searches being of course samples of W and members of the set of subsets of W. It is patent that it will be hard for a particular direct search to conveniently deposit us on a target Ti, or close enough that it is easy to thereafter find it on an incremental narrow scope of search. Thus, we see that the search for such a golden search will put us into a higher order search for search that will come from the power set. Which will hold cardinality 2^W for a set of large cardinality W, starting at 10^150 – 10^301, the reasonable threshold for the same FSCO/I you would dismiss, which turns out to be very directly relevant to blind needle-in-haystack search. KF
Carp, in case you do not know, neo-Darwinism is itself based on (bad) Theological premises not mathematical premises.
Clean up your own back yard first and then we can talk.
What separates the science of ID from the pseudo-science of neo-Darwinism is that ID can be rigorously falsified by experiment, and neo-Darwinism cannot.
In fact, ID invites rigorous experimentation to try to falsify its primary claim that unguided material processes cannot produce non-trivial functional information/complexity, and that only Intelligence can (Abel; Behe).
Moreover, science cannot be conducted unless teleology is presupposed on some ultimate level.
Insisting, as materialists/atheists do, that there is no ultimate reason why anything happens defeats the purpose of doing science, which is in the first place to try to find the reason why things happen.
i.e. “It just happened for no particular reason whatsoever” is a science defeater!
bornagain77:
I agree!
An experiment is what I intend to do. I will model both ID and evolution and see which is a more powerful method for generating successful body plans.
The problem with ID is that I can see the limitations in anyone actually being able to do it, other of course than someone who can accurately see the future.
As far as dismissing evolution, I believe that evolving even the simplest self-replicating code would qualify as proof that evolution could be a viable mechanism for biology also.
kairosfocus:
I have given this a lot of thought and ID is tougher than it looks, not from the perspective of designing organism X but rather of what role X should play in its environment.
X’s effect on other creatures and plant life in an environment could lead to extinction of other species, both prey and predator, as well as a change in the food chain.
Until you know the effects of the new organism X well into the future, you cannot release it into the environment.
KF the cardinality of the set of all possible searches (as defined by DEM) is infinite (see #100 above for a small example). But, setting that aside, you are assuming a uniform probability across the set of all possible searches. This is clearly not the case for evolution (and many other real world cases). Searches that involve non-viable steps are quickly terminated. There is a strong relationship between possible searches and the “target” which is a viable organism which has viable offspring.
You really think a comment on a blog that quotes other people and calls them idea-roots for FSCO/I qualifies as a serious presentation of the idea of FSCO/I?
Carpathian:
That doesn’t follow from anything.
Wow.
Keep them straw man arguments coming, though. They are entertaining.
What does that even mean besides proving you have no clue what is being debated?
Evolution via intelligent design is by far more powerful than unguided evolution. Try developing antennae without the specifications of what is required programmed in.
fifthmonarchyman: providing examples and evidence for his claim.
Be happy to. Thanks for asking. See Krupp & Taylor, Social evolution in the shadow of asymmetrical relatedness, Proceedings of the Royal Society B: Biological Sciences 2015. For that matter, so is Word Mutagenation.
Joe:
I have been thinking about how to implement ID. You have issues with my concerns.
If you’re better at this than I, show me how to use ID to introduce an organism into an environment.
Give me details.
I claim it’s harder than you think.
Let’s take the set {A,B}
Let say that mu = 2/3 A, 1/3 B.
The simplest way to understand mu_bar is to think of it as a uniform distribution over the set {1,2,3}. Then we compose it with the mapping {1,2} -> A, {3} -> B.
We’re not claiming anything about ontological statuses. What we are claiming is that whatever process produced a successful search must itself have a pdf biased towards indirectly producing the target.
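The composition described here can be spelled out in code. A sketch of the uniform-plus-mapping construction only (my variable names, not the paper’s formalism):

```python
import random

# mu on omega = {A, B} with P(A) = 2/3, P(B) = 1/3, realised by
# composing a uniform draw on {1, 2, 3} with a deterministic mapping.
mapping = {1: 'A', 2: 'A', 3: 'B'}

def sample_mu():
    """Draw from mu via the uniform-then-map composition."""
    return mapping[random.choice([1, 2, 3])]

random.seed(0)
n = 60_000
freq_A = sum(sample_mu() == 'A' for _ in range(n)) / n
# freq_A lands near 2/3, since two of the three equally likely
# intermediate outcomes map to A.
```

The biased distribution on {A, B} emerges from a uniform source plus the structure of the mapping, which is the sense in which the bias lives upstream of omega.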
Carpathian:
You start by knowing what it is you are going to design. If you need a planet like earth then you have to know what it is that makes our planet the way it is. And you make it so.
And if you wanted intelligent observers you would have to know what they require. If you wanted to introduce a new organism you would have to know what it requires. That’s it.
It all depends on what the purpose is and what is physically possible. I would design my organisms with the ability to adapt to changes- either genetically or behaviourally- Intelligent Design Evolution.
But then again, we don’t have any idea how to design living organisms, so yes it is even much harder than YOU think.
I’d like a self-driving helicopter.
Now that I’ve done the hard part of coming up with spec, can you get back to me with the easy part?
Joe:
That’s not enough. I would need to know its effect on other organisms.
If I wanted to introduce a new predator into a grassland environment, what prey would it successfully end up hunting? If it hunts the prey of a current predator, that older predator population may shrink in size.
If the new predator is smaller and successfully hunts adolescents, the prey population may take a much larger hit than would be indicated by the numbers taken since much fewer prey would reach breeding age.
These are serious questions that can’t be ignored if you’re going to be doing biological design.
Look at the Asian carp that have been transported to American rivers. They seem to have no natural predators and are thriving at the expense of American fish that have been here for thousands of years.
You can’t just release a new organism without carefully looking at the possible effects.
Winston Ewert:
It was easy for Ford to build the Edsel.
It wasn’t easy to get it accepted in the marketplace.
Carpathian-
An already existing design. No one on earth designed the carp.
If you are just going to quote-mine my posts then why even bother?
It all depends on what the purpose is and what is physically possible. I would design my organisms with the ability to adapt to changes- either genetically or behaviourally- Intelligent Design Evolution.
But then again, we don’t have any idea how to design living organisms, so yes it is even much harder than YOU think.
Go ahead- design a fish, I challenge you, knowing full well that you cannot do so.
Joe:
Whether an organism is designed or evolved, introducing it into the wrong environment could cause damage to other organisms already there.
That was the point I was making. For ID to work, it is not enough to design a single organism. The designer must know the future environment and the interaction of all the other organisms or he risks threatening the future of those other organisms.
The devil is in the details.
As far as designing a fish, if I managed to be able to, should I design an Asian Carp and throw it into the Mississippi?
We have evidence that it would not be good.
Try and think about it for awhile assuming that organism design was not an issue and you will find yourself stumped by the interaction of the ecosystem.
WE #111
Thanks again for continuing to respond. It is interesting.
Still struggling a bit here. I thought mu-bar was a pdf over M(omega). But M(omega) is continuous, containing all values of R A, 1-R B where R is a real number between 0 and 1. Are you saying that mu-bar is the pdf over M(omega) where P(R = 2/3) = 1? Perhaps more importantly, what role is mu-bar playing? It appears to be something like the pdf for M(omega) that gives the maximum likelihood for a pdf over omega that is 2/3 A, 1/3 B. But I suspect I am missing the point here.
“Must” or “probably is”? The resulting search might be some evidence for the pdf of the process but surely that relationship is a bit like the relationship between a Bayesian prior distribution and the observed outcome. The process creating the outcome (which is itself a pdf) has a prior pdf which is modified by the observed outcome. But you somehow seem to be claiming you can deduce something about the process based purely on the outcome and ignoring the prior.
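The Bayesian analogy here can be made concrete with a textbook Beta-Binomial update (Python; the numbers are my own illustration, not anything from the paper):

```python
# Prior belief about a process's success probability p: Beta(a, b).
# Beta(1, 1) is the uniform prior; the conjugate update just adds counts.
a, b = 1.0, 1.0
successes, failures = 9, 1      # hypothetical observed outcome

# Posterior after observing the outcome: Beta(a + successes, b + failures).
a_post, b_post = a + successes, b + failures
posterior_mean = a_post / (a_post + b_post)   # 10/12

# The observed outcome pulls belief toward high p, but the inference
# still runs through the prior: start from a prior sharply peaked at
# low p and the same outcome supports a different conclusion about
# the process that produced it.
```

This is the structure of the objection: outcome plus prior yields a posterior about the process; the outcome alone does not.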
Carpathian, Why do you keep ignoring parts of what I post?
No kidding.
That is why you have to know what it is you are trying to achieve!
One asian carp wouldn’t be an issue.
But anyway, Carpathian, let’s keep this thread open for Winston’s concerns. He and Mark appear to be getting into something and we shouldn’t interrupt it.
Joe:
I am not ignoring what you post. I am responding.
My point was that ID is more difficult than the design of organism X. Introducing X without understanding its side effects could cause the loss of your already successful previous designs.
Your response to me was to say it’s easy, just see what’s needed, but you simply hand-waved away the reality every manufacturer faces before introducing a product into the market. In some cases a new product eats into the profits of a previously established one by the same manufacturer.
I’m not trivializing ID. It has bigger problems than simply designing X. The spec must be well defined taking into account the effect of X on the environment.
That means the relationship of the whole ecosystem must be taken into account in the final design of X.
Joe:
Ok.
Hi, Dr Ewert:
You write:
And also:
This seems clear. But are you, therefore, saying anything more than “Probability distributions in which certain outcomes are likely must be produced by processes that make those outcomes likely?”
Is there, in other words, any reason to pick one distribution as the “natural” distribution against which other processes are “unnatural”? Or have I misunderstood you?
Zac says,
Be happy to.
I say.
Just as I have come to expect from you: more smoke-blowing.
Word Mutagenation clearly has a target of English phrases, and the details of the EA in the paper you mention are behind a paywall, so its target cannot be ascertained.
It’s self-evidently clear, however, that any EA will need criteria to determine which virtual organisms are selected. We call those criteria the target.
Come on Zac, please at least try to feign that you care about what is actually being discussed.
peace
Please tell me what criteria
fifthmonarchyman: Word Mutagenation clearly has a target of English phrases
English words are the fitness landscape. The only target is successful reproduction.
fifthmonarchyman: It’s self evidently clear however that any EA will need criteria to determine which virtual organism is selected.
Then biological evolution has a target: successful reproduction in the natural environment.
Zac says,
Then biological evolution has a target: successful reproduction in the natural environment.
I say,
I have no problem with this characterization. It’s your side that is claiming that evolution is not a search.
If evolution has one target, it cannot be predictably counted on to reach a second, unrelated one.
So if we observe an improbable outcome in a population that is not merely “successful reproduction in the natural environment,” we cannot credit its origin to evolution.
peace
fifthmonarchyman: I have no problem with this characterization. It’s your side that is claiming that evolution is not a search.
Search usually implies a specific goal. Some evolutionary algorithms are used to find solutions to specific problems. They have an endpoint.
On the other hand, life navigates a changing landscape, and many evolutionary algorithms model this process.
It is. My attempt to explain the distribution in an accessible manner failed rather badly there.
Let me try again:
Let’s start with the uniform case. Suppose we have the 26 letters of the English alphabet, and a uniform mu. Mu-bar is then the uniform distribution over possible pdfs, i.e., each letter’s probability can range from 0 to 1, subject to the probabilities summing to 1.
Now instead, consider that we have a choice between two categories: vowels and consonants. The natural distribution is 5/26 for vowels and 21/26 for consonants. That gives us mu. Note that this mu is essentially the uniform mu composed through a mapping.
Now, to construct the mu-bar for this case, start with the uniform over all pdfs from the previous case, and then compose it with the same mapping. So a search from before:
b=.5, e=.25, z=.25
becomes
vowels=.25, consonants=.75
b=.25, a=.25, r=.25, t=.25
also becomes
vowels=.25, consonants=.75
which means the probabilities will combine, making the probability of this search twice as much. (Of course, it will end up being more than twice that, because a lot of searches will map to the same search.)
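A minimal sketch of the coarse-graining Winston describes, in Python; the function name and the example pdfs are mine, for illustration only:

```python
# Sketch of the vowel/consonant example: distinct letter-level searches
# (pdfs) can collapse to the same coarse-grained search, so their
# probabilities under mu-bar combine.

VOWELS = set("aeiou")

def coarse_grain(letter_pdf):
    """Map a pdf over letters to a pdf over {vowel, consonant}."""
    v = sum(p for letter, p in letter_pdf.items() if letter in VOWELS)
    return {"vowel": v, "consonant": 1 - v}

search1 = {"b": 0.5, "e": 0.25, "z": 0.25}
search2 = {"b": 0.25, "a": 0.25, "r": 0.25, "t": 0.25}

print(coarse_grain(search1))  # {'vowel': 0.25, 'consonant': 0.75}
print(coarse_grain(search2))  # the same coarse search
```

Both letter-level searches induce the same vowel/consonant search, which is why the induced search’s probability under mu-bar is the sum of the probabilities of all searches that map to it.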
Indeed, I meant “probably.” You could just be excessively lucky instead.
That is indeed all that conservation of information claims.
Without introducing philosophical assumptions, no.
Zac says,
On the other hand, life navigates a changing landscape, and many evolutionary algorithms model this process.
I say,
geez
I give up, It is impossible to talk to you
peace
Thanks, Winston, for that response.
So what, in that case, does the Conservation of Information Law tell us, apart from the fact that if one thing is more probable than another thing, something must make the second thing more probable?
Why do we have to call that something “Active Information”?
It’s not as though the probability ratio tells us the probability of that process occurring.
For instance, let’s take the probability of finding a car wedged in the top of a tree. In a tornado-free environment, this is vanishingly unlikely. In a tornado-prone environment, the probability is quite a lot higher. But the ratio of those probabilities (or the difference between their logs), which you call “Active Information”, is not the probability of a tornado-prone environment, right? So what is it the probability of, and why should it matter?
Liddle,
I wrote about what the laws tell us in my post at Evolution News and Views. Rather than repeating myself, I’d request that you go read that.
Hi, Winston.
I have already read (twice) your article at ENV. I would not be asking you these questions here if I thought the answers to them were in that article.
In that article you wrote:
(my bold)
That last statement is totally uninformative. It seems to me to be tantamount to saying that the LCI tells us that A is either B or not-B: evolution works either because of design, or not because of design (“due to the configuration of the universe”).
That last thing (“the configuration of the universe”) can only be worth remarking on if you think that such a configuration is very improbable. But you do not say so, nor do you give us any idea as to how you would even work out such a probability. Indeed, you say that your argument is not a “fine-tuning argument”, so you are not even arguing that there is anything improbable about a universe configured in such a way as to facilitate evolutionary processes.
Moreover, nowhere in your ENV article that I can find do you tell us what the ratio of p/q is the probability of, which was my second question.
Sure I am a “critic” – but I’d be perfectly happy to hear the “metaphysical assumptions” that you think I would be “unlikely to accept”. But unless I hear them, I have no clue as to whether I’d accept them or not.
Go on – I might surprise you!
Elizabeth- Just model neo-Darwinian evolution and be done with it. Then people can have a look and show you where you failed or how your model failed to demonstrate anything (your CSI example on TSZ was such a failure).
Just remember natural selection is an eliminative process and it is blind and mindless.
EL says.
Sure I am a “critic” – but I’d be perfectly happy to hear the “metaphysical assumptions” that you think I would be “unlikely to accept”
I say,
Why do we even have to get into metaphysics? Why not just deal with the implications of the paper?
It seems to me that any metaphysical position at all would be compatible with this result. The only thing that is at issue is the strength of Evolution as a search.
Understanding the limitations of evolutionary searches doesn’t have to be about theism versus atheism, does it?
peace
We don’t have to, fifthmonarchyman – but Winston referred me to his ENV piece in which he implies that to understand what the Law of Conservation of Information has to say about a Designer, that’s what we’d have to do.
If we don’t, the LCI appears to amount to no more than: evolutionary success is either due to design or natural processes. So not an ID argument at all.
As for “the limitations of evolutionary searches” – no, they don’t have to be about theism versus atheism at all. I’d say they have nothing to do with either. What is fascinating about evolutionary searches – or rather, the evolutionary processes that underlie the adaptation of populations to their environment – is that they have very clear limitations, which is precisely what enables us to test hypotheses about them. Certain things can be done easily by human designers that can’t be done by evolutionary processes, and vice versa. And the pattern of biological characteristics, interestingly enough, is just the pattern you’d predict from evolutionary processes, and not from human designers, namely: nested hierarchies; no wholesale transfer of solutions from one lineage to another; retrofits rather than radical redesigns.
On the other hand we know – because we can utilise them in the evolutionary algorithms we use to solve intractable problems – that they can also find solutions that escape human designers. They do this because unlike us, they have no inhibitions about exploring apparently unpromising lines of development. And, contrary to what many often assert here, they often travel quite far down apparently disadvantageous tracks, where a human designer would turn back, discouraged. And yet often, the breakthrough turns out to be at the end of that track.
I speak metaphorically but the metaphor applies pretty well – many times I’ve had a solution to a problem have an ancestry that involved a large number of disadvantageous steps.
Elizabeth,
No. The possibilities are that active information was:
1) Injected into the universe via design.
2) Present at the original configuration of the universe.
3) Gained over time through stochastic processes.
The math rules out possibility number 3. That is why the remaining options are design or the initial configuration of the universe. I’m not making any claims about the probability of the configuration of the universe. I’m merely pointing out that everything has to be traced back to that configuration; you can’t appeal to an increase in active information after that point.
Furthermore, my post discusses the point of the COI in the paragraphs immediately after the one that you quoted. That is the section I was intending to refer you to.
It isn’t a probability. It’s a measurement of the bias of a search towards a target.
- Winston
Elizabeth:
That is incorrect. Whatever computers do they do because of human designers.
Umm, they do that because of TIME: they can run more trials in less time. They have VIRTUAL resources. They don’t have to actually build every iteration.
Come on Elizabeth, even you should be able to do better than that.
So to compare to humans you have to have millions of engineers working somehow in sync yet taking differing paths to the solution.
Computers do what they are designed to do. Everything they do traces back to a human designer. They are just tools.
Natural selection is different in that it is a process of elimination. Whatever is good enough survives.
From “What Evolution Is”, Ernst Mayr (one of the architects of the modern synthesis) page 117:
Page 118:
The evolutionary processes computers use are akin to selection.
Elizabeth:
Umm, evolution is too messy to produce a nested hierarchy. Darwin went over that in 1859; Mayr went over it; Denton went over it; and recently, in “Arrival of the Fittest”, Andreas Wagner went over it.
Nested hierarchies require distinct groups. Transitional forms would blur all lines of distinction.
And BTW, the US Army is a nested hierarchy, and it has nothing to do with evolution or descent with modification. Linnaean taxonomy, the observed nested hierarchy in biology, also has nothing to do with evolution or descent with modification.
This is what happens when TSZ doesn’t allow dissenting views. Its regulars wallow in their own ignorance.
Winston,
As evolutionists have pointed out, the target with respect to neo-Darwinian evolution is to survive and reproduce. And guess what? They start out given populations of living and reproducing organisms so they are already there. Target reached. The rest is all contingent serendipity.
What’s not to like with a concept like that?
fifthmonarchyman: I give up
It’s not that difficult. When someone says evolution is not a search, it’s because there’s no specific goal. Think of it as simply trying to keep one’s balance on a constantly shifting landscape.
Liddle:
This is obviously not true. A nested hierarchy is what we expect from human intelligent design over time with some multiple inheritance sprinkled in. In fact, almost all modern software programming languages enforce a strictly nested class hierarchy. C++ allows multiple inheritance but it is used sparingly in the business.
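Mapou’s point about class hierarchies can be illustrated with a toy example (the class names are made up): single inheritance yields a strictly nested tree, while multiple inheritance, as in C++ or Python, breaks strict nesting by letting one class sit under two parents.

```python
# Illustrative only: single inheritance forms a strictly nested hierarchy;
# multiple inheritance is the "sprinkled in" exception.

class Vehicle: pass
class Car(Vehicle): pass          # Car nests inside Vehicle
class Boat(Vehicle): pass
class AmphibiousCar(Car, Boat):   # multiple inheritance: not strictly nested
    pass

# The method-resolution order shows the linearised ancestry:
print([c.__name__ for c in AmphibiousCar.__mro__])
# ['AmphibiousCar', 'Car', 'Boat', 'Vehicle', 'object']
```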
fifthmonarchyman-
When someone says evolution is not a search and that’s because there’s no specific goal, they are really telling you that it is all contingent serendipity and that it should never be mistaken for a scientific concept.
Mapou 143- Nice job. Even though human designs can violate a nested hierarchy, that doesn’t mean they all have to. OTOH, gradual evolution will always produce transitional forms that blur the nice, neat lines of distinction nested hierarchies require.
Zac says,
When someone says evolution is not a search, it’s because there’s no specific goal.
I say,
Just as I said
you say,
Think of it as simply trying to keep one’s balance on a constantly shifting landscape.
Again just as I said.
We once again seem to be in agreement on a minor point, but instead of noting that and simply moving on to more important stuff, you insist on rephrasing. You do it ad nauseam here, and just as often you will slip in a red herring to try to change the subject entirely.
We end up with comment after comment in which nothing substantial is ever addressed, clogging threads that could be interesting with blah blah blah. I blame myself for continuing to try with you when there are others who are honest critics.
peace
MF, Genomes are 4-state per base systems, which imposes a finite and discrete set of possibilities. When we have a space of possibilities W, the set of samples on said space will come from the set of subsets, of cardinality 2^W. And that seems to me the operative context. Going on to the evolutionary computing case, inherently you are dealing with a bitwise granularity, which is discrete and finite. Yes, you may work with the continuum [not least as calculus is generally handy to work with], but you are going to come back to a fine grained, discrete and finite case. Which we should not forget. KF
PS: I should add that the exploration of possible molecular states can also be cellularised, based on the inherently discrete nature of molecules and the effective speed limit of chemical level interactions of relevant type ~10^-14 s.
Elizabeth Liddle:
Given that intelligent design is a natural process that’s a false dichotomy.
WE, I think I need to note that my point has always been that all I have provided by using the abbreviation FSCO/I is an acronym for a descriptive summary of the functionally specific subset of complex specified information. That concept is well established. Which, is what I cited. It is also a readily observed phenomenon, starting with the strings of glyphs used to communicate coded information we are all using in this thread and the similar strings in DNA and proteins. Wiring diagram organised entities can readily be reduced to similar descriptive strings, as is commonly done with appropriate software. KF
Carpathian, yes, design is tough to do. Especially when designed items have to function in a complex and partly uncontrolled and dynamic environment. That is why for instance central economic planning failed. But incremental development that has built-in robustness and adaptability, backed up by empirical testing and development with a healthy dose of stabilising negative feedbacks tends to work out fairly well. Robustness, redundancy and adaptability tend to be more effective than overly brittle optimisation on objective functions . . . if you can get away with that. Beyond, I would not infer from design of life to a designer or designers of effective omniscience. That has been on the table for thirty years of the modern design school of thought, here, Thaxton et al. KF
My apologies. That’s what I get for commenting on something I know nothing about. I was under the impression that you were trying to do something more novel than applying an acronym to the ideas of other people.
Winston Ewert wrote:
I was under the impression that he was trying to further develop the ideas of other people such that a wider audience can understand and appreciate them. And the acronym just further specified the subset of CSI, Dembski’s CSI.
Perhaps, I really haven’t followed FSCO/I enough to know. My only thought is that if it is a worthwhile development, I’d really like to see it published in a paper or conference.
WE #139
Thanks for continuing to be involved. I know how time consuming and irritating it can be responding to multiple interrogators.
This raises two questions:
1) Biased as compared to what? What does unbiased look like?
If you cannot define unbiased then it seems your assertion amounts to:
Either the initial configuration of the universe was such that what happened subsequently was possible or a designer made it possible. True but not very interesting.
2) You call –log base 2 of (p/q) active information. But you say p/q is not a probability. Yet in other contexts you define information as –log base 2 of a probability (e.g. endogenous information and exogenous information). It seems like active information is a different kind of thing from other kinds of information.
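For reference, the three quantities Mark mentions fit together like this; a sketch with made-up probabilities, where p is the baseline chance of hitting the target and q the chance under the search in question:

```python
from math import log2

# Illustrative numbers only (not from the DEM papers):
p = 1 / 2**20   # assumed baseline (blind search) success probability
q = 1 / 2**5    # assumed assisted-search success probability

endogenous = -log2(p)     # difficulty of the problem itself (20 bits)
exogenous  = -log2(q)     # difficulty remaining given the search (5 bits)
active     = log2(q / p)  # bias the search contributes (15 bits)

print(endogenous, exogenous, active)  # 20.0 5.0 15.0
```

Note that active = endogenous − exogenous: it is a difference of two informations (a log of a probability ratio) rather than the –log of a single probability, which is exactly the asymmetry Mark points out.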
WE #128
Thanks also for your efforts to explain mu and mu-bar. I am still struggling but let me try rephrasing what I think it might mean in my own words. I think you might be saying:
For any pdf mu that gives a probability P of “hitting a target” it is possible to find a higher level pdf mu-bar that creates pdfs that in total have the same probability of “hitting the target”.
Is that it?
Joe:
I see this as another huge problem for Darwinian evolution. Where is the blur? I’m sure there is yet another just-so, pseudoscientific story to explain it. Elsewhere, you mentioned Darwin’s extinction hypothesis but it’s obviously a non-explanation. Are there others?
WE, came back by overnight. Appreciated. We have differing foci and emphases. For me, over years, the functional subset of CSI has proved fruitful (and especially digitally coded strings such as in DNA); where I note that Dembski and Meyer have in fact pointed to that subset and its significance in what Wallace once called the world of life. Historically, that is the context in which CSI was recognised as a significant characteristic of life forms, as Orgel and Wicken noted. I will normally briefly explain or expand the acronym when I use it. KF
#147 KF
We were discussing M(omega), the set of possible searches of omega. The number of possible searches is not the same as the number of possible subsets of the search space. Although the search space may be discrete, the set of pdfs on that search space is infinite (in fact uncountably infinite). That applies even if there is just one item in the search space with two possible values. Ask Winston if you doubt me. DEM have defined a search in such a way that it is equivalent to a pdf on the search space. Therefore there are an uncountably infinite number of searches (as defined by DEM).
I don’t know what you mean by “operative context”.
MF,
as an exercise in pure math, one may indeed assign an uncountably infinite set of objective functions to a space.
But, I suggest, this loses sight of what we are addressing.
Performance has to be exhibited in time and space.
In the hoped for evolutionary process, it takes generations for distinct sub populations to emerge and sort out superior/inferior performance. And 20 minutes or 20 years makes little material difference to the resulting process lags and memory-of-the-past cumulative effects that lead to granularity as a reasonable approach. For you and I to be here, generations of successful reproduction had to have happened, across time, leading to lagged effects.
In computing, every step and cycle are granular in value and time.
Atoms and molecules have an effective speed limit for the chemical-level interactions relevant to forming both monomers and the chained macromolecules that appear in biological systems, ~10^13 to 10^14 per second.
And so, we come right back to the relevant finite and discrete nature of what we are dealing with. In short, A/D conversion is natural to the case and will impose granularity.
It remains so that WLOG, a system config can be described per wiring diagram on a structured set of Y/N q’s, yielding a bit string, inherently discrete. For a bit string of length n, W = 2^n gives the number of possibilities.
Then, samples taken from the set will be subsets, and the number of possible subsets is indeed 2^W.
For n = 500 – 1,000, we have that 10^57 solar-system atoms, or 10^80 for the observed cosmos, acting at 10^13 – 10^14 actions/s, will explore 10^87 – 10^88 or 10^110 – 10^111 possibilities in 10^17 s, which is an order-of-magnitude value for the timeline since the typical dating of the singularity. The result is the needle-in-haystack search challenge relative to 3.27*10^150 or 1.07*10^301 possibilities. Where also the power sets take in every possible individual sample of the sets; which will be finite.
So, on a reasonable assessment, there is indeed reason to consider the situation from this angle, and it sends the message that needle in haystack search challenge will dominate relevant cases. For we cannot explore possibilities, develop configs, exhibit and filter performance in infinitesimal increments of time or space.
So, while taking the granular view does not confine us to a flat random sampling as the way to explore possibilities, a golden search does point to a higher order search for a search, and it is reasonable to see this as confronting a power set abstract space. One may impose a further golden search — why? — but the regress of exponentiation is already evident. And, we already are at the practical point of implying that the laws and initial circumstances of the cosmos would have had requisites of life written into them in astonishing ways.
In short, you have suggested, inadvertently, a fine tuning, cosmological programming argument at the root of the physics of the cosmos.
And if design is at the table from that level, then there is no good, non-ideological reason to exclude it thereafter, at OOL or OOBP up to origin of our own body plan.
Worse, step back a moment and allow a non-countably transfinite set of possibilities for objective functions. The search for search challenge just exploded in scope. Of course, in practice, we will see clusters that boil down to re-imposing granularity for practical purposes. But not enough to help your case.
KF
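KF’s order-of-magnitude arithmetic above can be checked directly; the resource figures (atom counts, interaction rate, timescale) are his stated assumptions, not independent estimates:

```python
from math import log10

atoms_solar  = 1e57   # assumed atom count, solar system
atoms_cosmos = 1e80   # assumed atom count, observed cosmos
rate         = 1e14   # assumed chemical-scale actions per second
seconds      = 1e17   # rough time since the singularity

# States explored: 57 + 14 + 17 = 88 and 80 + 14 + 17 = 111 orders of magnitude.
print(round(log10(atoms_solar * rate * seconds)))   # 88
print(round(log10(atoms_cosmos * rate * seconds)))  # 111

# Configuration counts for 500- and 1,000-bit strings:
print(f"{2**500:.3g}")    # 3.27e+150
print(f"{2**1000:.3g}")   # 1.07e+301
```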
Winston in 138:
No. The possibilities are that active information was:
1) Injected into the universe via design.
2) Present at the original configuration of the universe.
3) Gained over time through stochastic processes.
Number three is the problem. Evolution combines a stochastic process (mutation) that generates information with a “fact checking” process (natural selection) that rejects the information that hurts the organism. The information that isn’t rejected is either useful to the organism or at least neutral. This makes evolution a “ratchet” that continually adds useful or neutral information to a genome while rejecting the bad information generated by mutations.
Have you ever noticed that Dembski’s Explanatory Filter can’t even handle this two step process? It asks if the process being tested is random OR lawful, but you can’t even enter a process that uses both into it.
What else do you think Dembski is overlooking?
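A toy version of the two-step process MatSpirit describes (random variation followed by a filter that rejects harmful changes) can be sketched like this; the fitness function and all parameters are illustrative, not a model of any real genome:

```python
import random

random.seed(1)

def fitness(genome):
    # Toy fitness: the count of 1-bits stands in for "useful information".
    return sum(genome)

genome = [0] * 50
for generation in range(2000):
    mutant = genome[:]
    i = random.randrange(len(mutant))
    mutant[i] ^= 1                       # step 1: random variation
    if fitness(mutant) >= fitness(genome):
        genome = mutant                  # step 2: keep neutral/beneficial only

print(fitness(genome))  # ratchets toward the maximum of 50
```

Neither step alone moves the count upward reliably; the ratchet is the combination of the two.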
Hi, Winston. Thanks for your response. You wrote:
You define “Active Information” simply as the ratio of the probability of X occurring given process A to the probability of X occurring given process B. So under that definition, all the “Active Information” is is the degree to which X is not a flat probability distribution.
In other words “Active Information” is simply a measure of how lumpy the probability distributions are in the universe.
So why does “the math” (and in what way does the math) “rule out possibility number 3”? Stochastic processes can indeed make what is originally flat, lumpy.
For instance, let’s take a deep tray of pebbles of assorted sizes, each size well mixed, and with a frequency distribution such that large ones are no better represented spatially than small ones. Your target is a large pebble (99th percentile). Pick a pebble from the top. As they are perfectly mixed, your chances of picking a large pebble are no better than your chances of picking any other pebble.
Now shake the tray. What happens next is a stochastic process. That process results in the big pebbles arranging themselves on the top, and the small ones further down, the tiniest ones being on the bottom. Now pick a pebble from the top. It is highly likely to be a large pebble.
So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray).
If not, why not?
In which case all you are saying is mainstream physics: that entropy is always increasing over the whole system – that you need to import energy to reduce local entropy (as I did when I shook the tray of pebbles). But we know this is possible – we can do it with pebbles, and plants do it with photosynthesis. Tornadoes do it. Adding energy to a system frequently reduces local entropy.
So what has the LCI got to add that isn’t just a restatement of Boltzmann?
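For what it’s worth, on DEM’s definition the quantity at issue in the pebble example would be computed like this; the post-shake probability is an assumed figure for illustration:

```python
from math import log2

p_before = 0.01  # chance of a 99th-percentile pebble from a well-mixed tray
p_after  = 0.80  # assumed chance after shaking sorts large pebbles to the top

active_information = log2(p_after / p_before)  # bits of bias toward the target
print(f"{active_information:.2f} bits")  # 6.32 bits
```

The calculation itself is neutral on the disputed question of whether the shaking, the physics, or the experimenter’s knowledge is the source of that bias.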
The passage after the one I quoted reads:
Which still doesn’t tell me “the point of the COI”! Of course “Darwinian evolution is incomplete”. All science is incomplete – and always will be. Sure, Darwinian models don’t attempt to account for the existence of the physical and chemical laws that make Darwinian processes possible. IDists like to claim that ID is not “Designer-of-the-gaps” – but that seems to be entirely where your paragraph above is going. Or, if that isn’t where it is going, what is it you are trying to say? As you note:
Precisely. It doesn’t.
The thing is, Winston, it seems to me that the further you, Dembski and Marks have travelled down the road Dembski embarked on with “Specified Complexity” and “No Free Lunch” (and I actually commend you in particular for this), the more it turns out that the “Design Inference” is no more than the conclusion that the universe must have started with properties that facilitated non-uniform distributions of events. In other words, that it started out, if not lumpy, with the capacity to become so. Not only that, but it has a property of “1/fness” which is certainly interesting – it contains variability (Information, if you will, or Shannon Entropy) at multiple scales, from sub-atomic to inter-galactic.
But we cannot infer a Designer from such a property, at least not from the probability of a universe with such a property, because we do not know the pdf of possible universes. It may be that lumpiness is a necessary property of existence. Ontologically, what could be said even to exist in a totally flat universe?
OK – in any case it’s more like an odds ratio, not a probability (my bad). But you could also express it as a measure of the increase in probability of an event, given a process that is not present at baseline, right? So you could write it as:
p(X|process B)/p(X|process A).
where X is a “target”, A is the baseline process (e.g. one with a flat pdf), and B is the process of interest, e.g. one in which some outcomes are more likely than others. Yes?
In which case you could simply convert it an actual OR:
[p(X|process B)/(1-p(X|process B))] / [p(X|process A)/(1-p(X|process A))]
Then you’d simply have a measure of how much more likely X is given process B than given process A. And if you regarded process A as one in which all outcomes were equally probable (as Dembski often does), then Active Information simply becomes a normalised expression of how much more probable X is under the process in question than it would be under an equiprobable random draw.
Where does this get us, other than to the conclusion that the universe is non-uniform?
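Elizabeth’s conversion can be made concrete with illustrative probabilities (process A is the flat baseline; the specific numbers are mine):

```python
# Probability ratio vs. odds ratio for a target X under two processes.
p_A = 0.001  # P(X | process A), e.g. a uniform draw
p_B = 0.200  # P(X | process B), the biased process of interest

prob_ratio = p_B / p_A                              # DEM-style ratio, ~200
odds_ratio = (p_B / (1 - p_B)) / (p_A / (1 - p_A))  # a true odds ratio, ~249.75

print(round(prob_ratio, 2), round(odds_ratio, 2))
```

The two measures agree closely when both probabilities are small, but diverge as p(X|B) grows, which is one reason the distinction matters.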
MatSpirit, The key problem is not incremental change within deeply isolated islands of function imposed by the requisites of interactive function arising from coupling many parts, but to initially find the islands of function, the viable body plans. In short variation of finch beaks among existing populations is one thing, arrival of flying birds as a body plan is quite another. And — once a priori evolutionary materialism is not imposed on the issue — there is simply no adequate body of observationally grounded evidence for an incrementally advantageous step by step treelike blind watchmaker path from microbes to Mozart, mango trees and molluscs etc across a continent of viable forms feasible to traversal in a few thousand MY. Not to mention, the challenge to bridge from chemicals in a pond or the like to a cell based first life form. And it is in that context that needle in haystack search challenge to find shores of function becomes pivotal. Hence WE’s 3-point cluster. KF
MF says,
Either the initial configuration of the universe was such that what happened subsequently was possible or a designer made it possible. True but not very interesting.
I say,
I find it to be interesting.
What happened subsequently was the awe-inspiring spectacle that is life. We are used to attributing this majestic panorama to evolution, and now we know the process is not up to the task. That is cool info to have.
EL says,
So shaking the tray has inserted Active Information. Gained over time by a stochastic process (shaking the tray).
If not, why not?
I say,
No you have not inserted active information.
The fact that the bigger pebbles will rise to the surface is a consequence of the laws of physics that are already present in the overall system from the beginning.
You don’t add any information by letting the system play-out according to those already existing laws.
The knowledge of what will happen when you shake already exists in your mind or you would not choose to shake in the first place.
We have active information from the preexisting laws and/or from your preexisting knowledge. No information whatsoever is added with the shaking.
The resulting increased probability of picking a big pebble could be accurately predicted before you even touched the tray.
peace
PS: Once we see the explanatory filter on a per aspect basis, it does address the hoped for effect of joint incremental chance and necessity. In particular, observe that the issue is a joint complexity-specificity condition that implies increments of 500+ bits of information. That is, bridging to islands of function. Much smaller increments within islands of function would be well within the reach of chance to explain high contingency aspects. Where, mechanical necessity does not explain high contingency aspects of an object or process but instead lawlike necessity where closely similar initial conditions lead to closely similar outcomes such as F = m*a.
fifthmonarchyman: We once again seem to be in agreement on a minor point but instead of noting that and simply moving on to more important stuff you insist on rephrasing.
Here is your statement again:
fifthmonarchyman: Premise one) Evolution is not searching for any specific target other than survival.
Premise two) Evolutionary Algorithms are searching for specific targets.
Conclusion) Evolutionary Algorithms are not “models” of Evolution.
Premise two is faulty. Some evolutionary algorithms search for specific targets; some do not. Hence, your syllogism is faulty.
fifthmonarchyman: We end up with comment after comment
Sure. That’s what happens when you lose track of the thread, and we then have to repeat your original contention.
Mapou: almost all modern software programming languages enforce a strictly nested class hierarchy.
Sure, but that doesn’t mean that when we look at human artifacts generally that they form a nested hierarchy.
As predicted Elizabeth just ignores her outrageous errors about nested hierarchies and computers. Willful ignorance it is then, eh, Lizzie?
Zachriel- Only intention can produce a nested hierarchy. Nested hierarchies are all artificial.
Well, I could make the same assertion about you, Joe, and, I submit, with more justification.
That fact that you think I am in error doesn’t make it so.
The possibility remains that you are.
What assertion, Lizzie? I made my case against you and I will and can defend it. Let’s see what you have and then we can tell who is right. However it is a given you won’t even address what I posted that proves my points.
No, you didn’t, Joe. You just asserted I was wrong. Well, I’m asserting you are. See how that works?
No, Lizzie, I made my case in two posts above- posts 139 and 140- you lose
Winston,
Is it your contention that any configuration of matter is information?
In #140 Joe wrote:
No, it isn’t, and Darwin didn’t say so.
Nested hierarchies require distinct groups. Transitional forms would blur all lines of distinction.
You have misunderstood the meaning of the term “nested hierarchies” then. Try “phylogenies” – it means the same thing, and they do not require discrete groups.
An observed nested hierarchy (or phylogeny) is just that – an observation. Linnaeus observed that the properties of living things produced such a hierarchy. Darwin posited, firstly, that such a hierarchy could arise from common descent, but that that in itself wouldn’t account for adaptive change over the generations. His theory of Descent with Modification and Natural Selection accounted for adaptive change.
It most certainly does allow dissenting views. What it does not allow is the posting of porn/malware (or links) nor does it allow the posting of personal info. Those are the only things that will get a member banned.
Apart from that, you can post any view you like at TSZ. We have only banned two people.
fifthmonarchyman wrote:
OK, say it was an earthquake then.
OK, fine. If you don’t count tray shaking as Active Information addition, then I am happy to stipulate that the Universe already contained the information required to allow tray shaking.
In that case Winston’s three options are, as I said, two, and we are no forrarder.
Design and/or an initial low-entropy (i.e. lumpy, non-uniform) universe.
Why should we infer Design?
Zac,
very last comment on this
It might have been nice to explore exactly what targets EA seek.
But that ship sailed and was lost at sea in the midst of boring comments about whether or not Evolution itself is a search and whether English phrases are targets or fitness landscapes.
Blah blah blah ZZZZ
peace
Lizzie I have quoted Darwin, so you lose. Phylogenetic trees are not nested hierarchies. You are confused. And Darwin did not say that common descent would produce a nested hierarchy. You are bluffing or lying.
and
and
Elizabeth doesn’t know what a nested hierarchy is nor what it entails.
fifthmonarchyman: It might have been nice to explore exactly what targets EA seek.
Evolutionary algorithms don’t always have specific targets, but can have fitness landscapes that the replicators navigate. We provided a couple of examples, one from the scientific literature that was indirectly cited by News in another thread, which is why we provided it. See Krupp & Taylor, Social evolution in the shadow of asymmetrical relatedness, Proceedings of the Royal Society B: Biological Sciences 2015.
Word Mutagenation doesn’t have a target. Rather, the replicators explore the landscape without regard to finding any particular position on the landscape.
fifthmonarchyman: But that ship sailed
While relative fitness changes based on what other replicators are doing, you can even change the dictionary itself. As long as those changes occur gradually, then the replicators would track along with those changes, like a ship on the waves.
This is all standard fare for evolutionary algorithms.
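The point that an evolutionary algorithm need not be handed a target can be illustrated with a toy sketch. Everything below is invented for illustration (it is not Word Mutagenation, and the landscape and parameters are arbitrary): the algorithm only ever compares replicators to each other; whether the landscape's peak nonetheless counts as a "target" is exactly what this thread disputes.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(x: float) -> float:
    # An arbitrary smooth "landscape" (assumption: any smooth function works).
    # The algorithm is never told where its peak is.
    return -(x - 3.7) ** 2

def evolve(pop_size=20, generations=100, sigma=0.5):
    # Start with a random population of real-valued "genomes".
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Each replicator produces a mutated offspring...
        offspring = [x + random.gauss(0, sigma) for x in population]
        # ...and selection keeps the fitter half of the combined pool.
        # Note: selection is purely relative; no goal state is consulted.
        combined = sorted(population + offspring, key=fitness, reverse=True)
        population = combined[:pop_size]
    return population

pop = evolve()
# The population climbs toward high-fitness regions without ever being
# handed a target, and it would track the landscape if it shifted gradually.
```

If `fitness` were redefined between generations (gradually), the same loop would track the moving landscape, which is the "ship on the waves" behaviour described above.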
Joe
Yes, they are, Joe. So if you mean something other than a tree structure by “nested hierarchy” then I don’t.
It is biased compared to whatever you take to be your natural distribution.
Indeed, it is somewhat different from other types of information.
The paper uses a particular mu-bar derived from mu, which ends up with the same total probability. There would in fact be many different mu-bars that would end up with the same probability of hitting the target as mu. I don’t believe it really matters which one you end up using.
It is the measure of how biased the distribution is towards a particular target. If the universe is lumpy in arbitrary ways that don’t tend toward the target of interest, the universe won’t have active information.
Where did the tray-shaking process come from? If it was in your “universe” from the start, then you always had a high amount of active information towards large pebbles. If you introduced it later, you are interfering with the universe, injecting active information after its creation.
I say that active information is non-increasing. Since entropy is also non-increasing you decide that this means that active information and entropy are the same thing. They are not the same thing.
That’s not what I intended to say, as I elaborated in the following paragraphs. I’m not asking Darwinian theory to give an account of the laws of nature; that is outside the scope of the theory. I’m asking Darwinian theory to make explicit its assumptions about the nature of fitness landscapes and physical laws. I’m asking Darwinists not to assume that the fitness landscapes and laws of physics don’t matter, but to recognize that the theory has to assume something about the nature of the fitness landscapes in order to work.
Elizabeth- Phylogenetic trees are not nested hierarchies. Period and I can provide a reference if you really need one.
And just because a nested hierarchy can be depicted as a tree does NOT mean all tree patterns are a nested hierarchy.
A Summary of the Principles of Hierarchy Theory That would be a start.
You have absolutely no idea what a nested hierarchy is even though I told you.
Winston Ewert: I’m asking Darwinists not to assume that the fitness landscapes and laws of physics don’t matter, but to recognize that the theory has to assume something about the nature of the fitness landscapes in order to work.
Actually, it’s a crosscheck. Evolution tends to work best when there is an ordered relationship between the genotype, phenotype, and environment. We have many observations which show this ordered relationship. Conversely, evolution tests the landscape, and historical evidence shows that the landscape exhibits properties amenable to evolution.
EL says,
Design and/or an initial low-entropy (i.e. lumpy, non-uniform) universe.
Why should we infer Design?
I say,
It is not a question of why we should infer design. We are hardwired to infer design. We have no choice in the matter.
The only question is whether we have any valid reason to abandon that preexisting inference.
On the other hand we have no natural inclination to expect an uncaused low-entropy universe. It is a forced conclusion. Why make it?
peace
fifthmonarchyman: The only question is whether we have any valid reason to abandon that preexisting inference.
In science, all presumptions have to be taken skeptically—especially intuitive notions of design, which have historically been misleading.
zac says,
In science, all presumptions have to be taken skeptically
I say,
Skepticism is good. Hyperskepticism not so much.
Skepticism says “I’m willing to explore other explanations if they arise”
Hyperskepticism says “I will disregard my hardwired impressions until I am given irrefutable proof of their validity”
You say,
—especially intuitive notions of design, which have historically been misleading.
I say,
This is simply incorrect. In my everyday life I’m much more likely to incorrectly attribute the artifacts of design to “natural processes”.
I assume that you are referring to our discovery of proximate causes, but that sort of thing does not in any way prove that our initial impressions were misleading. The process goes something like this:
1) I notice that the large pebbles are on the top of the tray and infer design.
2) I discover the tray has been shaken and that this shaking can cause large pebbles to move to the top.
number 2 does not invalidate number 1
peace
Hi, Winston
That was not my reasoning! It would be very strange reasoning, as entropy is always increasing! And in any case, it would be fallacious, even if the premises were true, which they aren’t.
I’m interested in your answer as to why they are different, but let me explain why I think they are related, and why I don’t think your conclusion is any different from the conclusion that the universe started with low entropy, which is the reason life is possible, but which leads to the conclusion that ultimately it will cease.
Entropy can be described, informally, as “lumpiness” or, slightly more formally as “non-uniformity”. If entropy is always increasing, then the ultimate fate of the universe would be “heat death” – a completely undifferentiated universe (hence the ultimate end of life).
And thermodynamic entropy, as you know, has a very similar definition to Shannon entropy, give or take a constant – it’s −Σ p_i log p_i, where p_i is the probability of the ith possible microstate of the system. Shannon entropy is the same, except that p_i is the normalised frequency (or probability if you like) of the available patterns.
Shannon entropy is thus maximised for a uniform probability distribution, which means that a channel in which the symbols have a uniform probability distribution has a greater capacity than any channel with the same number of symbols but a non-uniform distribution, i.e. can carry more information. So as a rather dangerous shorthand, we can equate high Shannon entropy with high information content, although really all it means is that it has high information capacity.
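The claim that the uniform distribution maximises Shannon entropy is easy to check numerically. A minimal sketch (the two example distributions are arbitrary, chosen only to make the comparison visible):

```python
import math

def shannon_entropy(probs) -> float:
    """H = -sum(p_i * log2 p_i), in bits; zero-probability terms contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # flat distribution over 4 symbols
skewed  = [0.70, 0.15, 0.10, 0.05]   # same 4 symbols, non-uniform

print(shannon_entropy(uniform))  # 2.0 bits: the maximum for 4 symbols
print(shannon_entropy(skewed))   # less than 2.0 bits: lower capacity per symbol
```

Any non-uniform distribution over the same four symbols comes out below 2 bits, which is the "greater capacity" point above in numerical form.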
So what we can say is that in a toy universe with high thermodynamic entropy, i.e. a fairly uniform distribution over possible microstates, no one microstate is very probable, and so the chance of any given microstate occurring at a given time is low. On the other hand, in a toy universe in which the entropy is low, the chances of certain microstates occurring might be very high (and others far lower). So high thermodynamic entropy means that you would need a lot of information (in the usual English meaning) to know when to look at the system in order to find a target microstate. In contrast, low thermodynamic entropy means that, as long as your target microstate was one of the high-probability ones, you would need very little information as to when to look – hang around for a few minutes and one will turn up.
This means that in a universe in a low entropy state (which ours was, and still is, compared to what it will eventually be), the probability distribution of microstates is not flat. So we have lots of microstates that are really quite common, even though, when the universe is in a high-entropy state, they’d be very rare. For instance, in a universe in a high entropy state, you are vanishingly unlikely to find a room that is warmer at one end than the other. When entropy is low, on the other hand, it happens quite often! Similarly, complex configurations, such as vortices, are common in a low entropy universe, even though they are extremely unlikely in a high entropy universe. Thus, compared to what is likely to occur in a high entropy universe, many extraordinary things are really quite likely in a low entropy universe – tornadoes, for instance.
You can see where I’m going with this, I hope. If target X has probability p in a high entropy universe, but probability q in a low entropy universe, then the Active Information represented by the low entropy state becomes equivalent to the entropy differential. Therefore, the Active Information required to make vortices, and chemistry, and, indeed, Life, possible, was indeed present at the start of the universe – embodied in its low entropy state, i.e. the state that gave it its extreme non-uniformity; its tendency to clump; its tendency to form a wide variety of elements of different weights; its tendency to give rise to energy humps and wells; in other words, the properties we call Physics and Chemistry, and what I have also called its “1/f-ness” – variability at a vast range of scales from the sub-atomic to the inter-galactic. And as entropy increases, the differential between what is probable in a flat universe (maximum entropy; maximum flatness of pdf) and what is probable in a lumpy universe, diminishes. So indeed Active Information will decrease over time. “Information”, in your formulation, will still be conserved, as the total is still −Σ p_i log p_i when the distribution is completely flat.
We could thank the Designer for granting us a universe that started in a low entropy state, but I’m not sure we can infer her existence from her apparent gift
ETA: subscripts don’t seem to work. Hope you can figure out my subscripted i’s.
Joe:
In that case, where I wrote “nested hierarchy” interpret my meaning as “phylogenetic tree”. In other words a distribution of properties that forms a tree-diagram. What Darwin drew, in other words, and what the Linnaean taxonomy forms.
fifthmonarchyman: Skepticism says “I’m willing to explore other explanations if they arise”
Sure.
fifthmonarchyman: This is simply incorrect. In my everyday life I’m much more likely to incorrectly attribute the artifacts of design to “natural processes”.
People have attributed mountains, storms, rivers, the Sun, jewels, the planetary motions, to design.
A tree diagram can be made from a common design. The history of cars can form a tree diagram.
Yes it can. But whereas common design can produce both tree and non-tree like lineages, Darwinian evolution (at least if we confine ourselves to longitudinal inheritance vectors, as Darwin did, and which are by far the most dominant vectors in macro-cellular organisms), can’t produce non-tree-like lineages. So that is a limitation. So if life evolved, we’d expect to see that limitation manifest in the distribution of properties of organisms, and we do. Whereas, if a Designer periodically intervened, we might see frequent violations of the tree, for instance, the transfer of the excellent bird-lung pattern into mammals, who could well benefit from them, or a re-routing of the laryngeal nerve, at least for giraffes.
If you were to plot a phylogeny for cars (using an objective technique), you’d get a reasonable tree, but a lot of jumps between lineages. So often, one company gets a neat design idea, and then all the other companies tool up to get on the band-wagon. Also, patents tend to keep things tree-like until they expire, then it’s HGT all over the shop.
So the noticeable dearth of solution-swapping between lineages, i.e. the fact that the tree-structure is much deeper than would be expected by chance, or by the product of designers capable of imaginative leaps, idea-borrowing, and re-tooling, is strongly suggestive of evolutionary processes at work rather than the work of an active intervening Designer.
Also the complete absence of tools, factories, or even footprints.
However, the existence of a universe in which all this could happen, or, indeed, the existence of existence at all, may be an argument for a creator deity. It’s not one I find compelling though.
WE
So if I take my natural distribution to be different from yours, then something may be biased for you but not for me? Yet active information is a measure of bias. Whose bias?
Elizabeth- Darwinian evolution doesn’t have a mechanism capable of getting beyond populations of prokaryotes and that is given starting populations of prokaryotes. And guess what? Prokaryotes produce non-tree like patterns.
Also given the nature of gradual evolution we wouldn’t expect a tree- a bush, maybe- but not a tree.
Zac says,
People have attributed mountains, storms, rivers, the Sun, jewels, the planetary motions, to design.
I say,
Yes and knowing the proximate causes of those things does not invalidate that original attribution any more than knowing that the pebble tray shook invalidates our impression that the big pebbles are on top due to design.
Your problem is you somehow have mistaken the proximate causes of things with their ultimate cause.
Winston Ewert’s paper can help you to get past this sort of muddled thinking if you will simply allow yourself think about the implications.
peace
@ Joe # 193
It’s not the gradualness of evolution that would make it bush-like, but non-longitudinal inheritance mechanisms. And indeed, in bacteria, we do see lots of horizontal inheritance mechanisms, and indeed we see much more bushiness.
In sexually reproducing species, we have other ways of recombining our genetic material, and so even though most inheritance is down lineages, there is still lots of scope for variation.
And the issue of prokaryotes to eukaryotes is an interesting one – the best hypothesis, and one supported by quite a lot of evidence, is probably Margulis’s. But there are others (“membrane infolding”, for instance). Not that those are non-Darwinian – it’s just that they presuppose a specific mechanism for a fairly major heritable change.
On the contrary, if we assume the Designer wanted to befuddle people like you, and give you rocks on which to stumble, we would expect that the Designer would occasionally arrange that animals would have odd things like laryngeal nerves that seemingly could use re-routing. (Although the Giraffe does just fine with the current configuration.) And guess what? That’s exactly what we find! How scientific such reasoning is!
mike 1962
Precisely so, Mike. Which is why we cannot conclude from what we observe that there was no Designer. The Designer hypothesis is consistent with absolutely any observation we could possibly make (if we stipulate that the Designer is omnipotent anyway).
Which is why of course, people make no such conclusions (or, if they do, why such a conclusion is not scientific).
All scientists conclude is that there are non-Design mechanisms that could do the job.
The problem I have with the ID movement is not their conclusion but their method of reasoning. I do not think you can infer a Designer from our observations, any more than we can infer not-a-Designer.
Liddle:
But this is exactly what we see in nature. We see flying mammals, swimming birds and mammals, walking fish, we see dolphins with similar echolocation systems as bats, we see different ocean species sharing common swimming mechanisms, etc. Lateral genes are such a problem for Darwinism that Darwinists have been piling up all sorts of non-explanations to wipe the egg off their faces. This is precisely why, lately, we hear so much silly pseudoscientific talk about convergent evolution.
The truth is that most of the LGTs occur early in the tree, which is precisely what we would expect from design.
On another tangent, what will you people do when long sequences and even entire genes are found to be identical in distant branches of the tree? Will you continue to plead convergence or will you come up with some other non-scientific, just-so story?
Elizabeth, The very nature of transitional forms would make it bush-like. Every population can be looked at like an asterisk as that is what pattern it has the potential to create.
Endosymbiosis is nothing more than “those eukaryotic organelles sure do look like they coulda been bacteria at one time”- that’s speculation, not science.
Those still need to be tested.
That is false. There is a reason not all rocks are artifacts and all deaths are not murders.
It’s the same reasoning used by archaeologists, forensic science and SETI and is based on our knowledge of cause and effect relationships. And all one has to do to refute it is demonstrate that mother nature is sufficient.
ID posits testable entailments.
EL says,
I do not think you can infer a Designer from our observations.
I say,
But you do infer design from our observations. You are hardwired to do so. That is not at issue; it is a fact.
What you have is a preexisting design inference that you have chosen to discount for some reason. The only question is do you have warrant to do so.
You don’t come to the design question from a neutral position. You can’t.
You start on the design side of the fence and therefore need compelling evidence to move to the nondesign side.
Do you have any?
peace
mapou
Those examples help the Darwinian story, not yours, I’m afraid, mapou. Flying mammals have wing structures in which the anatomical homologs are clearly mammalian, not bird-like. And dolphins and bats do indeed share genes that lend themselves to echo-locating functions – not surprisingly, as they are quite closely related, so evolving similar functions from similar genetic material is not especially remarkable. What is far more remarkable is that when organisms from different lineages (e.g. birds and mammals) adapt to a similar environment (marine), the same features are present, but with homologs relating to their own lineages, not each other’s. If this were not the case, computer-derived phylogenies wouldn’t consistently give a tree, with penguins at the end of one branch and seals at the end of another.
Not at all. Why should there be a problem? The fact that there are additional inheritance vectors does not falsify the mechanisms that were originally postulated. And they certainly do not falsify Darwin’s principle of natural selection from variants – it’s just that we now know that there are non-longitudinal means of producing those variants.
Convergent evolution normally refers to organisms that reach similar macroscopic morphologies by means of very different anatomical adaptations, e.g. birds and bats; dolphins and fish. They don’t present a problem, because one look at the skeleton will tell you that they are from different lineages. But clearly, an environment that favours streamlining and flippers will tend to favour variants that are more streamlined and have more flipper-like limbs. You are finding problems where there are none.
If you want to find a problem with scientific accounts of biology, I suggest you focus on OOL, because we still don’t have a good account of that, and may never, although there are a lot of very suggestive leads.
Do you mean HGT? Because that’s where they are most abundant – at the root of the tree. And I don’t see why it’s a prediction of Design. And we actually know a lot about how HGT happens.
Or perhaps you do mean LGT? In which case – sure, hybridisation occurs most often near branching points. But that’s absolutely obvious under Darwinian mechanisms. It’s not at all obvious under design – the reverse, I’d say, is true: it’s when products have gone quite a long way down the lineage that you start to get hybrids (iPhones, for instance, from computers + phones).
They already are, as you’d expect under common descent. Or do you mean “and absent from intervening branches”?
I don’t know, mapou – let me know when it’s been discovered, and I guess the scientists who discover it will tell us how they propose to investigate possible mechanisms.
Bob O’H and DiEb:
The stochastic process defined by Dembski, Ewert, and Marks terminates with the selection of an element of the space Omega. Nature has not stopped to say, “Here it is — birds!” To suggest that Ewert thinks he has a model of biological evolution would be to insult his intelligence. That leaves us to ask why he and his editor have tossed around the term “conservation of information” at ENV. The theorem of DEM does not apply to the non-terminating process that has generated birds. I would allow that it applies to the process that ended with extinction of the dodos. But I can’t bring myself to regard the empty population as an example of biological complexity.
Elizabeth:
Yes, they have a common DESIGN.
Convergent evolution is just another “just-so” explanation. Dr Spetner lays the claim bare in “The Evolution Revolution”.
fifthmonarchyman says:
Let me rephrase as I was unclear: I do not think you can infer a Designer of biological organisms from our observations of biological organisms. I do not think the evidence supports such an inference. The evidence is perfectly consistent with it (because an omnipotent Designer could design things any way she wanted, including designing them so that they looked as though they had evolved) but to make a positive inference, you’d have to be able to test it specifically. And you can’t do that easily without being more specific about constraints on the putative Designer.
Now I am misunderstanding you. I don’t know what you mean. I am not on “the nondesign side”. I don’t know whether there was/is a designer or not. I don’t see any evidence for one, but then an omnipotent designer could choose not to leave evidence.
So we certainly can’t rule an omnipotent designer out. But nor can we conclude that there must be one.
No, they don’t have a common Design. Bat wings and bird wings are quite different designs. It’s if anything like one designer was asked to make a flying animal out of a small dinosaur, and another was asked to make a flying animal out of a mouse.
Which is exactly what you’d expect of a pair of animals so obviously related to dinosaurs and mice, respectively, in so many other respects.
EL says,
but to make a positive inference, you’d have to be able to test it specifically. And you can’t do that easily without being more specific about constraints on the putative Designer.
I say,
No, you start with a positive inference from your observations; you then must suppress this notion. Check it out:
http://www.wsj.com/news/articl.....4046805070
You say,
I am not on “the nondesign side”. I don’t know whether there was/is a designer or not. I don’t see any evidence for one, but then an omnipotent designer could choose not to leave evidence.
I say
What I mean is you begin the game believing that what you see is the result of design and for some reason you abandoned that position for what you now think is a more neutral one.
You did not start life on the fence; you are not a blank slate.
The question is did you have warrant for your change in perspective.
Do you have convincing evidence that life is not designed? Is such evidence even possible?
I think you have already acknowledged it’s not. So why the change?
peace
Elizabeth:
All mammals have a common design
Elizabeth:
That is why we also use other observations. If we could test unguided evolution you would have something. Yet it can’t even be modeled and offers no testable entailments.
I say
What I mean is you begin the game believing that what you see is the result of design and for some reason you abandoned that position for what you now think is a more neutral one.
I don’t really know what you are asking me. No, I’ve just said, I don’t have convincing evidence that life was not designed. If the putative designer can make life look not-designed, then there’s no way we can rule it out, just as we can’t rule out the possibility that the earth was created last Thursday with the appearance of great age.
I just don’t see any good arguments to infer Design from biology.
To take an analogy: I might be perfectly convinced that my son has gone out to see a movie, but I cannot infer that from the fact that his coat isn’t on the hook. It could be on the floor of his bedroom, or he could indeed be out, but at the pub.
It’s the inferential chain I am disputing, not the conclusion.
And it seems to me that Ewert, Dembski and Marks are themselves conceding that the universe might be perfectly capable of producing living things “naturally” given enough “Active Information” at Big Bang. Which would not be an argument from biology, but an argument from physics and chemistry.
Not a very good one, I have to say, but closer to a good one than inferring it from biology.
I’d say the biggest argument for a creator deity is the fact that anything exist at all: “why is there anything rather than nothing?”
But I don’t think it’s terribly watertight, even then. “Nothing” turns out to be a complicated matter when space itself is one of the Things that can be Nothing.
Joe:
Yes it can and is. That you think it can’t be doesn’t make you correct.
EL says,
I just don’t see any good arguments to infer Design from biology.
I say,
You don’t need arguments. You are hardwired to infer design. You need arguments to justify your abandonment of this inference.
You say,
It’s the inferential chain I am disputing, not the conclusion.
I say,
There is no inferential chain; you infer design from your observations in one step.
quote:
“Biology is the study of complex things that appear to have been designed for a purpose.”
end quote:
Richard Dawkins
I’m not sure why you are having such a hard time grasping this. You need evidence to support changing from the position that life is designed to one that you now feel is more neutral.
you say,
And it seems to me that Ewert, Dembski and Marks are themselves conceding that the universe might be perfectly capable of producing living things “naturally” given enough “Active Information” at Big Bang.
I say,
The key word is “might” we don’t abandon our hardwired impressions just because it’s possible they are mistaken. We need good reasons to do so.
It’s possible I might be a brain in a vat but I have seen no compelling evidence to abandon my hardwired impression that my body exists so I don’t.
The same approach should be sufficient when dealing with the hardwired design inferences we all make.
peace
fifthmonarchyman: Yes and knowing the proximate causes of those things does not invalidate that original attribution any more than knowing that the pebble tray shook invalidates our impression that the big pebbles are on top due to design.
If you want to make a non-scientific claim, then we have no objection. If you claim there is scientific evidence of design in weather or biology, then we disagree.
fifthmonarchyman:
I don’t know what this means, or why it would be relevant.
ETA: I also don’t agree with Dawkins’ definition of biology. It’s not that I don’t “grasp” it – I think it is incorrect. Biology is the study of living things. I don’t agree that living things have the appearance of being designed. I think they have the appearance of having been born to similar parents.
Elizabeth, I call your bluff. Please present these alleged models for UNGUIDED evolution. And after that please tells us about these alleged testable entailments for UNGUIDED evolution.
Zachriel:
Then present a viable alternative for biology.
Elizabeth:
You haven’t demonstrated that you have understood them. You don’t even appear to understand exactly what is being debated. And if you read comment 139 it appears that you don’t understand computers.
But birds are different, right?
Birds and bats share a common design also. It is on a different level than the common design shared by mammals. All animals share a common design on some level- at least one level. And that common design is elucidated by Linnaean taxonomy.
Arguing with a Darwinist about intelligent design is like arguing with a Jehovah’s witness about blood transfusion.
EL says,
I don’t know what this means, or why it would be relevant.
I say,
It means that science has demonstrated that we are hardwired to infer design when we observe certain things in nature. Ask a small child why the zebra is striped and she will assume that it was designed to be that way.
It’s relevant because your position demands we deny this inborn assumption and instead come at the design question from a neutral position. Yet you don’t demand the same for other hardwired inferences.
For example, you don’t demand positive evidence before you grant that the material universe or your body exists. You tentatively accept these things until a better explanation for your impressions is given.
You say,
Biology is the study of living things. I don’t agree that living things have the appearance of being designed.
I say,
I’m not interested in your present opinion; I’m interested in how you can justify changing your mind.
At one time you did believe that life appeared designed. Everyone does. That’s what it means to say that this inference is hardwired. The universality of this impression has been confirmed scientifically.
What compelling evidence do you have for abandoning your natural belief?
peace
Zac said,
If you want to make a non-scientific claim, then we have no objection. If you claim there is scientific evidence of design in weather or biology, then we disagree.
I say,
The claim is that these things cannot be produced algorithmically without the addition of active information. It does not matter whether you agree or not, only whether you can disprove the claim.
Several hundred comments are all the evidence I need that you cannot.
peace
Mark Frank:
I’m sure the bias is all yours Mark.
Mark Frank and DiEb and Bob O’H:
The ID movement has a heavy investment in the terms “search,” “target,” “search for a search,” and “conservation of information,” going back at least to No Free Lunch (2002), and continuing through Being as Communion (2014). Ewert acknowledges now that a “search” doesn’t really search for the “target,” but sticks with the terms anyway. We can see that the change has yet to permeate his thinking, as he continues to refer to categorical success and failure in evolution:
This isn’t just careless language. It makes sense only if something really does seek to “hit the target.”
Ewert acknowledges that “active information” is a measure of bias, not information. But he continues to indicate otherwise by referring to “conservation of information.” He avoids speaking of the “search for a search,” though it is that to which the “conservation of information” theorem applies.
I’d like to hear what you have to say about improving terminology. The “target” is just an event. DiEb sometimes refers to a “search” as a guess of an element of Omega. I’m fine with that, but hardly anyone else is. I know it seems silly, but “uninformed decision process” might get a better reception, in part because it indicates that there’s a sequence of steps, and in part because it doesn’t come across as flippant. DEM’s process S does make sequential decisions on which elements of Omega to “inspect” (take data on), and Delta(S) is a final selection of one of the inspected elements.
The “search for a search” is just a mixture of uninformed decision processes, which is an uninformed decision process. The whys and wherefores are few and simple.
There is no need for the “conservation of information” theorem.
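The claim that the “search for a search” is just a mixture of distributions, and that a mixture of probability distributions is again a single probability distribution, can be checked in a toy sketch (hypothetical numbers, not DEM’s code):

```python
# Toy illustration: a "search for a search" as a mixture of distributions.
# Omega has four elements; each "search" is just a distribution over Omega.
p1 = [0.25, 0.25, 0.25, 0.25]   # a uniform (baseline) guesser
p2 = [0.70, 0.10, 0.10, 0.10]   # a guesser biased toward element 0

# The higher-level "search for a search" picks p1 or p2 with equal weight.
weights = [0.5, 0.5]

# The induced distribution over Omega is the weighted mixture, which is
# itself one probability distribution: no new mathematical object appears.
mixture = [weights[0] * a + weights[1] * b for a, b in zip(p1, p2)]

print(mixture)       # element 0 gets 0.475, the rest 0.175 each
print(sum(mixture))  # totals 1 (up to float rounding)
```

Whatever hierarchy of “searches for searches” one stacks up, the induced distribution over Omega collapses back to a single mixture like this one.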
SimonLeberge:
The opening sentence from the abstract of A General Theory of Information Cost Incurred by Successful Search:
Further:
SimonLeberge:
And this will probably continue to be the case as long as targeted searches continue to be presented as proofs of evolutionary theory.
Elizabeth Liddle:
There are indeed all sorts of silly ways to talk about entropy, most of which are wrong. If you ask someone what entropy is, they won’t be able to tell you.
Elizabeth Liddle:
They are related because Shannon’s measure of information can be applied to any probability distribution.
However, there are many cases in which the entropy is undefined. That’s why they are different.
Simple and concise.
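The point can be made concrete: Shannon’s measure takes nothing but a probability vector as input, so it applies to any finite distribution regardless of what the outcomes represent. (A minimal sketch; the helper name is mine. For countably infinite supports the sum can diverge, which is the “undefined” case.)

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i * log2(p_i), in bits, for a finite distribution p."""
    assert abs(sum(p) - 1.0) < 1e-9, "p must sum to 1"
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# The formula never asks what the outcomes are "about":
print(shannon_entropy([0.5, 0.5]))                # a fair coin: 1.0 bit
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # four microstates: 2.0 bits
print(shannon_entropy([0.9, 0.05, 0.05]))         # a biased "search": ~0.57 bits
```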
Mark Frank:
If the space Omega is countably infinite, then there definitely is no “natural” baseline distribution. DEM rule this out, but they shouldn’t. The most “natural” choice of a space of genotypes of organisms is countably infinite. Even if they argue for an upper bound on the size of a genotype, that doesn’t get them a particular distribution.
That’s my best biologically-relevant example.
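The standard argument behind the countably infinite case can be stated in one line (a sketch, not from DEM): a uniform baseline would have to assign every genotype the same probability, and no constant can sum to 1 over a countably infinite set.

```latex
\text{If } p(\omega) = c \text{ for every } \omega \in \Omega
\text{ with } \Omega \text{ countably infinite, then }
\sum_{\omega \in \Omega} p(\omega) =
\begin{cases} 0, & c = 0,\\ \infty, & c > 0, \end{cases}
\quad \text{so no choice of } c \text{ gives total probability } 1.
```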
S = k ln W
(It’s actually on Uncle Ludwig’s grave . . . )
Mung (224-225):
Ewert has taken a big step away from DEM in his article at Evolution News and Views. You probably shouldn’t scour it for quotes. You might suffer the awful realization that I gave you correct explanations of the math before any of the ID theorists published them.
fifthmonarchman wrote:
I dispute your premise. I don’t think we are “hard-wired” to think that everything is designed. I think we are born with the capacity to infer intention, and that in the early years some children may over-generalise – which is typical of a lot of features of early child development – a child will, typically, learn the word for “dog” and then call all four-footed animals “dogs”. My son, interestingly, once asked me “how do tornados see to suck?” His default was to assume they were intentional agents. He was very relieved when I explained that they were inanimate.
But these intuitions are not universal.
But even if your premise was correct, there is no need to justify why erroneous assumptions, or defaults, we are “hard-wired” to entertain as children should not be replaced by evidence-based conclusions as we become mature enough to call our instinctive assumptions into question.
Mung:
Most people can tell me very precisely. But not all will give the same definition. That doesn’t matter, as long as they make it clear what they are talking about. I was talking about the flatness of the probability distribution of microstates (thermodynamic entropy) or symbols (Shannon entropy); the distribution is maximally flat when −Σ p_i log p_i is maximal.
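A quick numerical check of that claim (a sketch; the function name is mine): among distributions over the same outcomes, the flat one maximises −Σ p_i log p_i.

```python
import math

def H(p):
    """Shannon entropy -sum p_i * log2(p_i) of a probability vector, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

flat  = [0.25, 0.25, 0.25, 0.25]   # maximally flat over four states
lumpy = [0.70, 0.10, 0.10, 0.10]   # same states, probability "clumped"
spike = [1.00, 0.00, 0.00, 0.00]   # all probability on one state

print(H(flat))    # 2.0 bits, the maximum (log2 of 4)
print(H(lumpy))   # ~1.36 bits
print(H(spike))   # zero bits: no uncertainty left (may print as -0.0)
```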
fifthmonarchyman: The claim is that these things can not be produced algorithmically without the addition of active information. It does not matter whether you agree or not only if you can disprove the claim,
The genome can incorporate information about its relationship to the environment through evolution.
Are you claiming there is scientific evidence of design in weather?
SimonLeberge:
Too late! I already mocked his use of entropy in that article.
That said, if entropy is lumpiness and birds are lumpy then birds are entropy and entropy is for the birds.
Elizabeth Liddle:
That’s one way to define precision, I suppose.
Does entropy have mass and velocity?
1.) There is no such thing as Shannon entropy.
2.) Thermodynamics
Let me quote from Wikipedia:
Zachriel:
Where does “aboutness” come from?
EL said,
I dispute your premise.
I say,
It’s not a premise; it’s a summary of the latest scientific findings on the subject.
EL said,
I think we are born with the capacity to infer intention, and that in the early years some children may over-generalise
I say,
It’s not just children; adults universally make the same inference. We all do. It’s how we are wired.
check it out
http://www.science20.com/write.....oke-139982
and
http://www.icea.ox.ac.uk/fileadmin/CAM/HADD.pdf
and
http://www.iep.utm.edu/theomind/
you say,
there is no need to justify why erroneous assumptions, or defaults, we are “hard-wired” to entertain as children should not be replaced by evidence-based conclusions as we become mature enough to call our instinctive assumptions into question.
I say,
I’m not saying that we should not question our instinctive assumptions as more evidence becomes available.
I’m saying that in order to be consistent we must have the same evidential standard for abandoning the design inference that we do for other innate assumptions.
In other words in order to be justified in ignoring the universal assumption of design you need compelling evidence.
Do you have such evidence?
Peace
Zac says,
Are you claiming there is scientific evidence of design in weather?
I say,
geez
No, I’m claiming it is impossible to talk to you.
peace
Y’all have done a crazy amount of posting in my absence. There is no way I can keep up with this thread. But I’ll try to answer a few questions.
Indeed, if we choose a different natural distribution, the active information of the same search could be very different for me and for you. You might conclude the universe had near-zero active information, and I might conclude it had a lot of active information. However, either way we come back to the same conclusion: the original configuration of the universe either had a natural distribution which made my target probable, or had a strong bias to make my target probable.
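That baseline-dependence can be put in numbers. In DEM’s papers active information for a target is I+ = log2(q/p), where p is the target’s probability under the assumed baseline and q its probability under the process being analysed; the figures below are made up purely for illustration:

```python
import math

def active_info(q, p):
    """Active information I+ = log2(q / p): the bits by which the process
    outperforms the assumed baseline on the target."""
    return math.log2(q / p)

q = 0.10        # probability that the process in question hits the target

# Two observers who picked different "natural" baseline distributions:
p_yours = 0.09  # under your baseline the target was nearly as probable anyway
p_mine = 1e-6   # under mine the target was astronomically improbable

print(active_info(q, p_yours))  # ~0.15 bits: near-zero active information
print(active_info(q, p_mine))   # ~16.6 bits: a large amount of active information
```

The same process, the same target: only the choice of baseline changes the verdict, which is the point of the paragraph above.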
As I stated in my ENV article, COI does not give us a solid reason to infer design. A Darwinist can (and should) accept COI as true without rejecting Darwinism. It poses a problem only for a Darwinist who thinks that all that matters is selection, replication, and mutation, and that the laws of physics don’t matter and could equally well be anything.
Indeed, it would be rather bizarre reasoning, you’ll have to forgive my typo.
What’s the difference between entropy and active information?
Active information is a consequence of probability. It doesn’t assume anything about the laws of physics. This is useful for being able to make limited claims about universes that we know nothing about. As long as they operate according to a stochastic process, we can claim that they follow the conservation of information. We cannot make the same claim about entropy.
For example, consider a universe which has only one law: gravity. It can start with a very uniform distribution of particles. It thus starts with high entropy. Over time, the particles are attracted to each other into a giant ball, thus losing entropy and transitioning to a low-entropy state. If we take our target to be that ball, we have a very large amount of active information. But the central point is that COI still applies, even though entropy runs in reverse in this imaginary universe.
Another issue is that active information requires that the universe be biased towards some particular target, whereas low entropy merely requires that it be clumpy. In that way, active information is more specific. If I stick only with entropy, I can only look at the question in terms of the probabilities of states with entropy similar to that of birds.
As another example, consider the question of why the water on earth is predominantly located in the oceans rather than uniformly distributed throughout the earth’s atmosphere. There is a high amount of active information in the target of having full oceans. There must be something in the laws of the universe that makes this happen. The answer is pretty obvious: gravity.
If I look at the same question from the perspective of entropy, what do we get? Certainly, having all the water in the ocean can be described as a low entropy state. Entropy tells us that this has to be paid for by increasing the entropy elsewhere.
So to summarize:
1) Entropy is a physical law; conservation of information is a consequence of the laws of probability.
2) Active information is concerned with particular targets; entropy is concerned with non-uniformity in general.
3) Active information is concerned with the underlying laws that made an outcome probable; entropy is concerned with balancing out local decreases in entropy with increases elsewhere.
WE:
Indeed. Feel free to address yourself to those comments you feel are actually relevant. Most are not. I call this the Entropy theory of Blog comments. [They mostly clump around irrelevance.]
Many of us appreciate your comments here. At least some of us do our best not to misrepresent them.
Thanks to Winston Ewert for mixing with his critics. I hope the exercise will have been of some benefit. Thanks to Mark Frank, Simon Leberge, Elizabeth Liddle and DiEb for their contributions, which while critical, all managed to maintain a civil and workmanlike tone.
If only some of the Greek chorus here could take note…
Winston:
I’ve never actually met a “Darwinist” who thought any such thing. It would be a bizarre position.
But Winston, with respect, this seems a little disingenuous. For years, William Dembski appeared to be arguing that we could infer a Designer from the complexity (Specified Complexity) of biological organisms because Darwinian processes couldn’t produce them with adequate probability (in fact, he often said that Specified Complexity was closely related to Irreducible Complexity, which is presumably why that terrible rendering of a bacterial flagellum still heads this site’s page). Darwin has been in the sights of the ID movement for years, and Behe was front and centre at the Dover trial. Here is Dembski in “Specification: the Pattern that Signifies Intelligence”:
And while he did not say how to compute P(T|H) in a way that “takes into account Darwinian and other material mechanisms”, the fact that he chose this example suggests that he considered it small enough to pass the Specification test.
Now you are saying that the Dembski project, at least, presents no problem for Darwinian evolution (and I agree) unless that Darwinian theory posits that “the laws of physics don’t matter”. Darwin’s theory was always predicated, not just on the laws of physics, but specifically, on the existence of ancestral forms of life that reproduced with heritable variance in reproductive success.
If “the laws of physics could be anything”, then under most alternative scenarios, there could be no such ancestral forms of life. There could be no reproduction; there could be no mapping of similar genotypes on to similar phenotypes; there could be no heritable variance in reproductive success in the current environment. Your statement simply tells us that our universe is life-friendly.
Which is simply not under dispute.
Thank you for the rest of your post. Yes, I basically agree. And in fact, I suggest, many of Dembski’s critics have been making this point for years – that what you have written there is all his case amounts to – namely, a case against a straw man. Which was why I used to make lame jokes about the “eleP(T|H)ant in the Room”.
If the argument here is, essentially, “oh, Darwinian evolution works just fine, but it requires as a prerequisite a universe in which the laws of physics and chemistry are something like the ones we observe in ours”, then any Design Inference boils back down to some kind of fine-tuning argument. Or a variant on Aquinas: “why is there low-entropy structure rather than high-entropy mush?”
Which is a good question, and one with potential metaphysical implications, but one more aligned with the position of, say BioLogos than that of the Discovery Institute. Or indeed, of most modern theology.
If you have no problem with the idea that all the information of life is already present in the laws of physics, then fine.
However, you should realize that a little probability arithmetic will show that such incredibly fine-tuned laws inevitably point to a designer.
You do understand that it presents the problem: “where does the information come from”, right? IOW it does present a problem for “not-front-loaded unguided evolution”.
Elizabeth:
Elizabeth, no one knows how to calculate P(T|H) for most biological structures because evolutionary biologists cannot provide any numbers. They can’t provide any numbers because they have no idea; they can’t even model their claims.
That is the real “eleP(T|H)ant in the Room”, Lizzie. Your position can’t even say whether it is feasible, let alone provide any actual evidence or a probability.